DBA Blogs

Inject variable value

Tom Kyte - Fri, 2021-06-11 17:06
Hi guys. I have a piece of code like this: <code> declare myvar number := 4; begin select rule into v_stmt from rules where rule_id=1; --v_stmt -> 'begin :a := my_function(my_var, 1), 1); end;' execute immediate v_stmt; end; / </code> Is there a way to inject my_var value to execute v_stmt?
Categories: DBA Blogs

SQL Loader header row attribute cannot be applied to detail rows in event of multiple header/detail transactions

Tom Kyte - Fri, 2021-06-11 17:06
How can I make Oracle SQL*Loader apply a header row's attribute value to its related detail rows when the file contains multiple header/detail transactions? The problem is that the header attribute is not carried down to the detail rows of each transaction. It does work for a single header: the header is evaluated once, for the very first header record, and that value is applied to all detail rows, which is not what should happen. I need each header row's attribute value applied to its own child detail rows. Here is the example. For each <b>H - Header</b> record, I need to stamp its attribute value (<b>1001</b>, <b>1002</b> or <b>1003</b>) onto each of its respective detail records while loading via SQL*Loader. <code>H ABC 1001
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92
H ABC 1002
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92
H ABC 1003
D XYZ 89.90
D XYZ 89.91
D XYZ 89.92</code> The expected result in the database table after the SQL*Loader run, which does not happen today, is as follows. Any suggestions? <code>H ABC 1001
D XYZ 89.90 1001
D XYZ 89.91 1001
D XYZ 89.92 1001
H ABC 1002
D XYZ 89.90 1002
D XYZ 89.91 1002
D XYZ 89.92 1002
H ABC 1003
D XYZ 89.90 1003
D XYZ 89.91 1003
D XYZ 89.92 1003</code> Thank you.
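One common workaround (a sketch only, with hypothetical table and column names): have SQL*Loader load both record types into a staging table in file order, tagging each row with a load sequence (e.g. a SEQUENCE(COUNT) field in the control file) and leaving the header value NULL on detail rows, then carry the header value down with an analytic function: <code>-- staging(rec_type, item, amount, hdr_val, load_seq) is assumed
select rec_type, item, amount, carried_hdr
from  (select s.rec_type, s.item, s.amount, s.load_seq,
              -- each detail row picks up the most recent header value
              last_value(s.hdr_val ignore nulls)
                over (order by s.load_seq) as carried_hdr
       from   staging s)
where  rec_type = 'D'
order  by load_seq;</code> The key point is that the propagation happens in SQL after the load, because SQL*Loader itself evaluates each record independently.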
Categories: DBA Blogs

Impact of proper column precision for analytical queries

The Oracle Instructor - Fri, 2021-06-11 04:33

Does it matter if your data warehouse tables have columns with much higher precision than needed? Probably: Yes.

But how do you know the precision of your columns is larger than required by the values stored in these columns? In Exasol, we have introduced the function MIN_SCALE to find out. I’m working on an Exasol 7 New Features course at the moment, and this article is kind of a sneak preview.

Any impact will only show with huge numbers of rows, of course, so it would be nice to have a table generator that gives us large test tables. Another Exasol 7 feature helps with that: the new clause VALUES BETWEEN.


CREATE OR REPLACE TABLE scaletest.t1 AS
SELECT CAST(1.123456789 AS DECIMAL(18,17)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

This generates a table with 1000 million rows in only 30 seconds on my modest VirtualBox VM. Obviously, the scale of the column is too large for the values stored there. But if it weren't that obvious, here's how I can find out:

SELECT MAX(a) FROM (SELECT MIN_SCALE(testcol) As a FROM scaletest.t1);

This comes back with the output 9 after 20 seconds of runtime, telling me that the scale actually required by the values is at most 9. I'll create a second table for comparison with only the required scale:


CREATE OR REPLACE TABLE scaletest.t2 AS
SELECT CAST(1.123456789 AS DECIMAL(10,9)) AS testcol
FROM VALUES BETWEEN 1 AND 1e9;

So does it really matter? Is there a runtime difference for analytical queries?

SELECT COUNT(*),MAX(testcol) FROM t1; -- 16 secs runtime
SELECT COUNT(*),MAX(testcol) FROM t2; -- 7 secs runtime

My little experiment shows that the query running on the column with the appropriate scale is twice as fast as the one running on the column with the oversized scale!

In other words, it would be beneficial to adjust the column definition to the scale the stored values actually need, with statements like this:

ALTER TABLE t1 MODIFY (testcol DECIMAL(10,9));

After that change, the runtime goes down to 7 seconds as well for the first statement.

I was curious whether the effect also shows on other databases, so I prepared a similar test case for an Oracle database: the same tables, but with only 100 million rows. It just takes too long to export 1000-million-row tables to Oracle using VMs on my notebook. And don't even think about generating 1000-million-row tables on Oracle with the CONNECT BY LEVEL method; that will just take forever, or more likely break with an out-of-memory error.

The effect also shows with the 100 million row tables on Oracle: 5 seconds of runtime with the oversized scale and about 3 seconds with the appropriately scaled column.

Conclusion: yes, it is indeed sensible to define table columns according to the actual requirements of the values stored in them, and it makes a difference, performance-wise.

Categories: DBA Blogs

Cannot alter Oracle sequence start with

Tom Kyte - Thu, 2021-06-10 22:46
I have tried altering the START WITH value of an Oracle sequence, but I get the error [<b>ORA-02283: cannot alter starting sequence number</b>]. I tried to find out why Oracle does not allow this, but I could not find an appropriate answer. So my question is: why does Oracle not let you change a sequence's START WITH value? (PS: I am hoping there is a really solid technical reason behind this.) Thanks in advance!
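For what it's worth, a workaround sketch (the sequence name my_seq is hypothetical; RESTART is available in recent releases, and the increment trick works on older ones): <code>-- recent releases (18c onwards): restart the sequence in place
alter sequence my_seq restart start with 1000;

-- older releases: temporarily change the increment to jump the gap,
-- consume one value, then restore the normal increment
alter sequence my_seq increment by 999 nocache;
select my_seq.nextval from dual;
alter sequence my_seq increment by 1 cache 20;</code> The classic alternative is simply dropping and recreating the sequence with the desired START WITH, at the cost of losing grants and dependencies.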
Categories: DBA Blogs

INSERT data in TIMESTAMP column having year less than 4713 i.e. timestamp like '01/01/8999 BC'

Tom Kyte - Thu, 2021-06-10 22:46
I want to insert data into a TIMESTAMP column with a year earlier than 4713 BC, i.e. a timestamp like '01/01/8999 BC', so the year here is -8999. When I try to insert it, I get an error like 'year must be between -4713 and +9999'. But some of my tables already contain years earlier than 4713 BC, such as -8999 and -78888, so the limit does not appear to apply to the TIMESTAMP data type. How, then, can I insert the year -8999 into a TIMESTAMP column?
Categories: DBA Blogs

Specified tablespace for IOT

Tom Kyte - Thu, 2021-06-10 22:46
I have two tablespaces named USERS and INDX, respectively. The default tablespace for the current user is USERS. I created an IOT table named tb_zxp. Since there is no need to specify a tablespace for the table data of an IOT, I'd like to put the whole index of tb_zxp in tablespace INDX. <code>create table tb_zxp (customer_id integer , store_id integer, trans_date date, amt number, goods_name varchar2(20), rate number(8,1), quantity integer, constraint pk_zxp primary key (customer_id,store_id,trans_date)) organization index including amt overflow tablespace indx; insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250); insert into tb_zxp values (11,25,date '2021-04-11',760,'Tower',3.8,200); insert into tb_zxp values (24,9,date '2021-05-11',5200,'Washing machine',5200,1); commit;</code> However, this query shows that the index is still placed in the default tablespace USERS: <code>select tablespace_name from user_extents where segment_name in (select 'TB_ZXP' c from dual union select index_name from user_indexes where table_name='TB_ZXP'); TABLESPACE_NAME ------------------------------------------------------------------------------------------ USERS</code> Then I removed the INCLUDING ... OVERFLOW clause from the table creation statement and tried again.
<code>create table tb_zxp (customer_id integer , store_id integer, trans_date date, amt number, goods_name varchar2(20), rate number(8,1), quantity integer, constraint pk_zxp primary key (customer_id,store_id,trans_date)) organization index tablespace indx; insert into tb_zxp values (11,21,date '2021-04-10',500,'Cocacola',2,250); commit;</code> This time, the index lands in tablespace INDX as expected: <code>select tablespace_name from user_extents where segment_name in (select 'TB_ZXP' c from dual union select index_name from user_indexes where table_name='TB_ZXP'); TABLESPACE_NAME ------------------------------------------------------------------------------------------ INDX</code> Could any guru kindly explain why removing the INCLUDING ... OVERFLOW clause gives the desired result?
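One way to read the behaviour (a sketch, not a definitive ruling): a TABLESPACE clause placed after OVERFLOW names the tablespace for the overflow segment only, so in the first statement INDX received the overflow rows while the index segment fell back to the default. To control both segments, each needs its own clause: <code>create table tb_zxp ( customer_id integer, store_id integer,
  trans_date date, amt number, goods_name varchar2(20),
  rate number(8,1), quantity integer,
  constraint pk_zxp primary key (customer_id, store_id, trans_date))
organization index
  tablespace indx            -- index (primary key) segment goes to INDX
including amt
  overflow tablespace users; -- overflow columns go to USERS</code>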
Categories: DBA Blogs

Using analytical functions for time period grouping

Tom Kyte - Thu, 2021-06-10 22:46
Hi Tom, I have a table like below: <code>GRP,SUBGRP,START_Y,END_Y
122,...
123,A,2010,2011
123,A,2011,2012
123,A,2012,2013
123,A,2013,2014
123,B,2014,2015
123,B,2015,2016
123,B,2016,2017
123,A,2017,2018
123,A,2018,2019
123,A,2019,2020
124,...</code> I would like to find the start and end of all intervals in this table, like so: <code>GRP,SUBGRP,MIN,MAX
122,...
123,A,2010,2014
123,B,2014,2017
123,A,2017,2020
124,...</code> A simple GROUP BY would show the results over the complete time period, but not over the separate intervals: <code>GRP,SUBGRP,MIN,MAX
122,...
123,A,2010,2020
123,B,2014,2017
124,...</code> I think it should be possible with analytic functions, but I can't work out how.
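A sketch of the classic gaps-and-islands ("Tabibitosan") approach, assuming the table is called t: the difference between a ROW_NUMBER over the whole group and a ROW_NUMBER per (group, subgroup) is constant within each consecutive run of the same SUBGRP, so it can serve as an extra grouping key: <code>select grp, subgrp,
       min(start_y) as min_y,
       max(end_y)   as max_y
from  (select t.*,
              -- constant within each consecutive run of the same subgrp
              row_number() over (partition by grp
                                 order by start_y)
            - row_number() over (partition by grp, subgrp
                                 order by start_y) as island
       from t)
group by grp, subgrp, island
order by grp, min_y;</code> With the sample data above, the three runs of GRP 123 get island values 0 (A, 2010-2014), 4 (B, 2014-2017) and 3 (A, 2017-2020), which produces exactly the wanted output.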
Categories: DBA Blogs

Cloud Vanity: A Weekly Carnival of AWS, Azure, GCP, and More - Edition 5

Pakistan's First Oracle Blog - Thu, 2021-06-10 21:09

Welcome to the next edition of the weekly Cloud Vanity. As usual, this edition casts light on multiple cloud providers and what's happening in their sphere. From the mega players to the small fish in the ocean, it covers it all. Enjoy!

AWS:

Reducing risk is the fundamental reason organizations invest in cybersecurity. The threat landscape grows and evolves, creating the need for a proactive, continual approach to building and protecting your security posture. Even with expanding budgets, the number of organizations reporting serious cyber incidents and data breaches is rising.

Streaming data presents a unique set of design and architectural challenges for developers. By definition, streaming data is not bounded, having no clear beginning or end. It can be generated by millions of separate producers, such as Internet of Things (IoT) devices or mobile applications. Additionally, streaming data applications must frequently process and analyze this data with minimal latency.

This post presents a solution using AWS Systems Manager State Manager that automates the process of keeping RDS instances in a start or stop state.

Over the last few years, Machine Learning (ML) has proven its worth in helping organizations increase efficiency and foster innovation. 

GCP:

In recent years, the grocery industry has had to shift to facilitate a wider variety of checkout journeys for customers. This has meant ensuring a richer transaction mix, including mobile shopping, online shopping, in-store checkout, cashierless checkout or any combination thereof like buy online, pickup in store (BOPIS).  

At Google I/O this year, we introduced Vertex AI to bring together all our ML offerings into a single environment that lets you build and manage the lifecycle of ML projects. 

Dataflow pipelines and Pub/Sub are the perfect services for this. All we need to do is write our components on top of the Apache Beam sdk, and they’ll have the benefit of distributed, resilient and scalable compute.

In a recent Gartner survey of public cloud users, 81% of respondents said they are working with two or more providers. And as well you should! It’s completely reasonable to use the capabilities from multiple cloud providers to achieve your desired business outcomes. 

Azure:

Generators at datacenters, most often powered by petroleum-based diesel, play a key role in delivering reliable backup power. Each of these generators is used for no more than a few hours a year or less at our datacenter sites, most often for routine maintenance or for backup power during a grid outage. 

5 reasons to attend the Azure Hybrid and Multicloud Digital Event

For over three years, I have had the privilege of leading the SAP solutions on Azure business at Microsoft and of partnering with outstanding leaders at SAP and with many of our global partners to ensure that our joint customers run one of their most critical business assets safely and reliably in the cloud. 

There are many factors that can affect critical environment (CE) infrastructure availability—the reliability of the infrastructure building blocks, the controls during the datacenter construction stage, effective health monitoring and event detection schemes, a robust maintenance program, and operational excellence to ensure that every action is taken with careful consideration of related risk implications.

Others:

Anyone who has even a passing interest in cryptocurrency has probably heard the word ‘blockchain’ branded about. And no doubt many of those who know the term also know that blockchain technology is behind Bitcoin and many other cryptocurrencies.

Alibaba Cloud Log Service (SLS) cooperates with RDS to launch the RDS SQL audit function, which delivers RDS SQL audit logs to SLS in real time. SLS provides real-time query, visual analysis, alarm, and other functionalities.

How AI Automation is Making a First-of-its-Kind, Crewless Transoceanic Ship Possible

Enterprise organizations have faced a compendium of challenges, but today it seems like the focus is on three things: speed, speed, and more speed. It is all about time to value and application velocity—getting applications delivered and then staying agile to evolve the application as needs arise.

Like many DevOps principles, shift-left once had specific meaning that has become more generalized over time. Shift-left is commonly associated with application testing – automating application tests and integrating them into earlier phases of the application lifecycle where issues can be identified and remediated earlier (and often more quickly and cheaply).

Categories: DBA Blogs

How to Identify the MINVALUE AND MAXVALUE SCN in the flashback versions query

Tom Kyte - Thu, 2021-06-10 04:26
Hi TOM, I am facing an issue with a flashback versions query, described below. <code> Query 1: select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN from employees VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2021-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2021-06-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') where employee_id='xyz' </code> The above query returns 2 records. <code> XID START_SCN END_SCN 0B0017008F7B0300 39280796004 39282671828 [INSERT] 2D001B0016420000 39282671828 (null) [UPDATE] </code> But when I pass the versions_startscn value from the first query's result into the filter condition of the second query, I get 0 records instead of the expected 1. <code>Query 2: select versions_xid XID, versions_startscn START_SCN, versions_endscn END_SCN from employees VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE where versions_endscn = '39282671828' </code> The above query returns 0 records. Is there a way to identify the MINVALUE and MAXVALUE used in the second query? In what cases does the MINVALUE get set?
Categories: DBA Blogs

v$session query get garbled code

Tom Kyte - Thu, 2021-06-10 04:26
Hi, I'm using Oracle 10.2.0.1 for studying, and I queried v$session using PL/SQL Developer from a Windows PC client, but the results contain garbled text, as follows: <code>SQL> select osuser from v$session; OSUSER -------- SYSTEM ???? SYSTEM abc ??????????</code> Then I ran the same command on the DB server, but got the same results. Here is the character set information: <code>SQL> select userenv('language') from dual; USERENV('LANGUAGE') ---------------------------------------------------- SIMPLIFIED CHINESE_CHINA.ZHS16GBK SQL> select * from v$nls_parameters; PARAMETER VALUE ---------------------------------------------------------------- ---------------------------------------------------------------- NLS_LANGUAGE SIMPLIFIED CHINESE NLS_TERRITORY CHINA NLS_CURRENCY ? NLS_ISO_CURRENCY CHINA NLS_NUMERIC_CHARACTERS ., NLS_CALENDAR GREGORIAN NLS_DATE_FORMAT DD-MON-RR NLS_DATE_LANGUAGE SIMPLIFIED CHINESE NLS_CHARACTERSET ZHS16GBK NLS_SORT BINARY NLS_TIME_FORMAT HH.MI.SSXFF AM NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR NLS_DUAL_CURRENCY ? NLS_NCHAR_CHARACTERSET UTF8 NLS_COMP BINARY NLS_LENGTH_SEMANTICS BYTE NLS_NCHAR_CONV_EXCP FALSE 19 rows selected</code> I have also set the environment variable in my Windows OS: NLS_LANG=SIMPLIFIED CHINESE_CHINA.ZHS16GBK. What's more, I tested my DB as follows: <code>SQL> select '??' from dual; '??' ---- ??</code> Now, could you please help me: why do I get garbled text such as ??? when querying osuser from v$session? Thanks a lot.
Categories: DBA Blogs

MultiThreaded Extproc Agent - max_sessions, max_task_threads and max_dispatchers parameters

Tom Kyte - Thu, 2021-06-10 04:26
Hello Team, Thanks for all the good work AskTOM is doing. Can you please help us to better understand about max_sessions, max_task_threads and max_dispatchers configuration parameters of MultiThreaded ExtProc agent and share tips on how to determine optimum values of these parameters? We have multiple clients on multiple DB servers. Each server has different H/W capacity and different load. Our understanding is that the final optimum values will depend on H/W configuration and number of external procedure calls. However, we are trying to arrive at an initial parameter configuration that can be fine tuned further based on actual situation. Thanks, AB
Categories: DBA Blogs

Users Privileges

Tom Kyte - Tue, 2021-06-08 15:46
Hello, I am facing a problem and it goes like this: I have a schema named CML in which we want to put common objects (tables, views, procedures, packages, etc.) used by the team. I would like to know what grants I should give to my user (eliasr) so it can create tables, views, procedures and packages in the CML schema. I also want to be able to change existing packages and procedures.
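A common sketch: since the objects live in the CML schema rather than your own, cross-schema creation requires the ANY system privileges. These are powerful (they apply to every schema), so many shops prefer connecting as CML via proxy authentication instead; both options are shown, under the assumption that no finer-grained mechanism is available in your release: <code>-- option 1: broad "ANY" privileges for eliasr
grant create any table,     alter any table,     drop any table,
      create any view,      drop any view,
      create any procedure, alter any procedure, drop any procedure
to eliasr;

-- option 2: proxy authentication - no extra privileges needed,
-- eliasr works inside CML directly
alter user cml grant connect through eliasr;
-- then connect as: eliasr[cml]/password_of_eliasr</code> Note that CREATE ANY PROCEDURE covers packages as well as standalone procedures and functions.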
Categories: DBA Blogs

PDB lock down profiles.

Tom Kyte - Tue, 2021-06-08 15:46
Team, is it not possible to have multiple values in a lockdown profile? Kindly advise. <code>c##sys@ORA12CR2> drop lockdown profile p1; Lockdown Profile dropped. c##sys@ORA12CR2> create lockdown profile p1; Lockdown Profile created. c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') 2 option=('CURSOR_SHARING') 3 value=('FORCE','SIMILAR'); alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') * ERROR at line 1: ORA-65206: invalid value specified ORA-00922: missing or invalid option c##sys@ORA12CR2> alter lockdown profile p1 disable statement = ('ALTER SESSION') clause=('SET') 2 option=('CURSOR_SHARING') 3 value=('FORCE'); Lockdown Profile altered. c##sys@ORA12CR2></code>
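As a workaround sketch: each value can be disabled with its own ALTER statement, or the VALUE clause can be omitted entirely to disable the option for every value: <code>alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('CURSOR_SHARING') value = ('FORCE');

alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('CURSOR_SHARING') value = ('SIMILAR');

-- or block ALTER SESSION SET CURSOR_SHARING for all values:
alter lockdown profile p1 disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('CURSOR_SHARING');</code>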
Categories: DBA Blogs

Oracle apex - changing page authorization scheme on custom form

Tom Kyte - Tue, 2021-06-08 15:46
Hi, is it possible to change a page's authorization scheme (or other page properties) in Oracle APEX from some custom-built form? I need it for my application's admin panel. I know which table it is: apex_200200.wwv_flow_list_items, but in the cloud it is forbidden to edit the WWV-prefixed tables of the apex_200200 user. I tried to find an API package to do it but can't find one. In the cloud, the ADMIN user doesn't have SELECT privilege on that table, so this query does not work: select * from apex_200200.wwv_list_items [Error] Execution (1: 27): ORA-01031: insufficient privileges I tried to use this package but get no results: begin WWV_FLOW_API.update_page(p_id=>124,p_flow_id=>122,p_required_role=>'7.59043067032354E15'); end; Is there any way to do this through some other package in the database?
Categories: DBA Blogs

“User created” PDB max before licensing multi tenant

Tom Kyte - Tue, 2021-06-08 15:46
Hey guys, Regarding https://blogs.oracle.com/database/oracle-database-19c-up-to-3-pdbs-per-cdb-without-licensing-multitenant-v2 and the documentation it refers to: I'm wondering if you can clarify how the 3 PDB limit (before additional licensing) works with application containers. I've tried setting max_pdbs=3 and then creating an application container; when I create PDBs within that application container, the 3rd one fails with an error (max PDBs exceeded). The documentation isn't clear (in my opinion at least) on what a "user-created PDB" is in this type of setup, so I'm not sure whether the error is a bug or it's enforcing things appropriately. Thanks! <code> alter system set max_pdbs=3 scope=both sid='*'; ALTER SESSION SET container = CDB$ROOT; CREATE PLUGGABLE DATABASE App_Con AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE App_Con OPEN instances=all; ALTER SESSION SET container = App_Con; CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB1 OPEN instances=all; CREATE PLUGGABLE DATABASE PDB2 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB2 OPEN instances=all; --below fails CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb_admin IDENTIFIED BY <pass>; ALTER PLUGGABLE DATABASE PDB3 OPEN instances=all; </code>
Categories: DBA Blogs

Ampersand in input string

Tom Kyte - Tue, 2021-06-08 15:46
I have an input string I can't change. Unfortunately this input string contains an ampersand (&). I was hoping to create a function that could translate a URL into more readable text. The input string will vary, but an example could be the following: <i>'pw://pw.vd.dk:Vejdirektoratet/Documents/Bigger&space;projects/7x/Workflow.JPG'</i> and in this case the wanted output string would be <i>'Vejdirektoratet\Documents\Bigger projects\7x\Workflow.JPG'</i>. My hope is to create a function URL2PATH which could be called as: <code>select URL2PATH('pw://pw.vd.dk:Vejdirektoratet/Documents/Bigger&space;projects/7x/Workflow.JPG') from dual;</code> with the wanted result mentioned above. If that is not possible when the input string contains an ampersand, I would secondarily like a workaround to call the function with some additional add-ons, but I haven't been able to figure out a solution for that either. I have been looking at REPLACE, TO_CHAR, TRANSLATE etc., but so far without any success. One of my challenges is that this function has to be called from an outside program, so the solution of issuing 'set define off' ahead of the call doesn't seem to be an option.
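A sketch of such a function, assuming the wanted path always follows the last colon and that the &-escape for a space is the only one to decode. Note that the ampersand problem is purely a SQL*Plus substitution feature: inside PL/SQL, and in calls from an outside program, the character is ordinary data. The concatenation trick below only keeps SQL*Plus from substituting while the function is being created: <code>create or replace function url2path (p_url in varchar2)
  return varchar2
is
begin
  -- take everything after the last colon, decode the space escape,
  -- then flip forward slashes to backslashes
  return translate(
           replace(regexp_substr(p_url, '[^:]+$'),
                   '&' || 'space;', ' '),
           '/', '\');
end url2path;
/</code> For the sample input this returns 'Vejdirektoratet\Documents\Bigger projects\7x\Workflow.JPG'.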
Categories: DBA Blogs

Full Fast Index Scans with predicates

Tom Kyte - Fri, 2021-06-04 20:06
You helped me previously with setting up skinny indexing here: QUESTION_ID:9542855000346335758 This has been implemented, is now in production, and is working really well and super fast; thanks for all the previous help. I have a follow-up question. Although most of the orders tables have millions of rows, unprocessed orders are generally only a few thousand rows, so hitting skinny indexes (indexing virtual columns that only hold data for unprocessed orders, as you explained in the question above) has worked brilliantly. In most cases a FAST FULL SCAN is performed on the indexes, as they are relatively small and we select most of the data held in them. What I find odd is that when I look at the explain plan in SQL Developer, if I add a predicate to the query it's not shown in the plan; the plan remains the same, i.e. just a FAST FULL SCAN. If I don't see the predicate in the explain plan, where is it being processed? I know a FAST FULL SCAN reads all the data from the index in one go, but would I not still see the predicate in the explain plan? I can send screenshots from SQL Developer if that helps. Thanks.
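One way to check (a sketch; the query is hypothetical): pull the plan of the actual cursor from the shared pool with DBMS_XPLAN.DISPLAY_CURSOR, whose output includes a Predicate Information section showing the filter applied during the INDEX FAST FULL SCAN even when the plan shape itself does not change: <code>select /* my_marker */ order_id
from   orders
where  unprocessed_flag = 'N';   -- hypothetical example query

-- show the plan of the last statement executed in this session,
-- including the Predicate Information section
select * from table(dbms_xplan.display_cursor);</code> Some GUI "explain" views omit the predicate section by default, which can make it look as if the predicate disappeared.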
Categories: DBA Blogs

Blocking TRUNCATE on partitions

Tom Kyte - Fri, 2021-06-04 01:46
We have a scenario where we are considering using partitions on a control table. The control table will be hit with regular CRUD operations coming from triggers on transaction tables, hence we want to address fragmentation by running a regular TRUNCATE TABLE. My questions are: (1) What is a good strategy to handle fragmentation in high-usage tables? (2) Is it possible to lock a partition of a table and execute a TRUNCATE on that partition? Thanks, Deepankar
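For question (2), a sketch (the table and partition names are hypothetical, assuming a partitioned control table): a single partition can be locked and truncated on its own, leaving the other partitions untouched: <code>-- take an exclusive lock on just one partition
lock table control_tab partition (p_batch1)
  in exclusive mode;

-- truncate only that partition, keeping indexes usable
alter table control_tab
  truncate partition p_batch1 update indexes;</code> Keep in mind that TRUNCATE is DDL, so it commits; sessions performing DML on the same partition at that moment will conflict with it.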
Categories: DBA Blogs

How to restrict a user with DBA role from directly executing a package?

Tom Kyte - Fri, 2021-06-04 01:46
Hello AskTom team, Thanks for all the good work you are doing. Can you please help us with the following questions? <b>A.</b> Can we restrict or impede a user with the DBA role from directly executing a particular package? This package is defined in another user's schema and can invoke third-party libraries. We need to restrict its execution because of a security requirement imposed by the third party. One option we are aware of is Database Vault; however, it looks like that option affects the whole database. <b>B.</b> Is it possible to define something like an access control list on a package to ensure that it can only be used by specified users? Let's say User1 has defined lots of packages, and among them there is a special package named SecurePackage1. User2, User3 and User4 execute all of User1's packages using the "Execute Any Procedure" privilege. Is there a mechanism by which we can specifically define the users that can execute the SecurePackage1 package? Thanks, AB
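For question B, one common sketch (all names hypothetical) that works even when callers hold EXECUTE ANY PROCEDURE: check an allow-list of session users at the top of each public routine in the package body, since no grant-based mechanism can narrow a system privilege: <code>create or replace package body user1.securepackage1 as
  procedure secure_proc is
  begin
    -- allow only the named users through
    if sys_context('userenv', 'session_user')
         not in ('USER2', 'USER3', 'USER4')
    then
      raise_application_error(-20001, 'not authorized');
    end if;
    null;  -- protected work goes here
  end secure_proc;
end securepackage1;
/</code> This does not stop a DBA who can rewrite the package body (question A), which is exactly the gap products like Database Vault are designed to close.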
Categories: DBA Blogs

Cloud Vanity: A Weekly Carnival of AWS, Azure, GCP, and More - Edition 4

Pakistan's First Oracle Blog - Thu, 2021-06-03 19:31

Welcome to the next edition of the weekly Cloud Vanity. The foundation of any cloud matters. Cloud is, and always will be, a distributed hybrid phenomenon. That is why architecting, developing, and operating a hybrid mix of workloads requires stable, scalable, and reliable cloud technologies. This edition discusses a few of them from across the different clouds out there.


AWS:

AWS SAM or Serverless Application Model is an open source framework that you can use to develop, build and deploy your serverless applications.

Pluralsight, Inc., the technology workforce development company, today announced that it has entered into a definitive agreement to acquire A Cloud Guru (ACG).

AWS Lambda Extensions are a new way to integrate your favorite operational tools for monitoring, observability, security, and governance with AWS Lambda. Starting today, extensions are generally available with new performance improvements and an expanded set of partners including Imperva, Instana, Sentry, Site24x7, and the AWS Distro for OpenTelemetry.

Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. 

Let’s say your Python app uses DynamoDB and you need some unit tests to verify the validity of your code, but you aren’t sure how to go about doing this.

Azure:

Personal access tokens (PATs) make it easy to authenticate against Azure Devops to integrate with your tools and services. However, leaked tokens could compromise your Azure DevOps account and data, putting your applications and services at significant risk.

Azure announces general availability of scale-out NVIDIA A100 GPU Clusters: the fastest public cloud supercomputer.

La Liga, the foremost Spanish football league, has expanded its partnership with Microsoft Azure to focus on machine learning (ML), over the top (OTT) services, as well as augmented reality.

A little over a year ago, Microsoft Build 2020 was Microsoft’s first flagship event to become all-digital early in the COVID-19 pandemic.

Generators at datacenters, most often powered by petroleum-based diesel, play a key role in delivering reliable backup power. Each of these generators is used for no more than a few hours a year or less at our datacenter sites, most often for routine maintenance or for backup power during a grid outage. 

GCP:

Having constant access to fresh customer data is a key requirement for PedidosYa to improve and innovate our customer’s experience. Our internal stakeholders also require faster insights to drive agile business decisions. 

5 ways Vertex Vizier hyperparameter tuning improves ML models

Getting started with Kubernetes is often harder than it needs to be. While working with a cluster “from scratch” can be a great learning exercise or a good solution for some highly specialized workloads, often the details of cluster management can be made easier by utilizing a managed service offering. 

Zero-trust managed security for services with Traffic Director

Databases are part of virtually every application you run in your organization and great apps need great databases. This post is focused on one such great database—Cloud Spanner.

Others:

Kubernetes is a robust yet complex infrastructure system for container orchestration, with multiple components that must be adequately protected. 

It is no contradiction to say that being ‘cloud-native’ has not much to do with cloud computing. There is an idea that cloud is a place, a suite of technologies or services that run somewhere in data centres. But the cloud is not a place; it is a way of working.

The most innovative companies of 2021 according to BCG: Alphabet, Amazon, Microsoft all make it.

In this article, the author discusses how cloud computing has changed the traditional approach to operation and maintenance (O&M).

This June, a small marine research non-profit with a huge vision will launch a first-of-its-kind, crewless transoceanic ship that will attempt to cross the Atlantic Ocean without human intervention.

Categories: DBA Blogs
