Feed aggregator

Funny Twitter Names

VitalSoftTech - Tue, 2019-10-08 09:45

If there is anyone who knows about the importance of having a funny Twitter name, it is the people who already use Twitter. Twitter is one of the most popular social media platforms. This microblogging website averages around 330 million active users every month across the globe. Fun Fact: Jack Dorsey, […]

The post Funny Twitter Names appeared first on VitalSoftTech.

Categories: DBA Blogs

Billboards to Dashboards: How OUTFRONT Media is Gaining Insights into Market Trends

Oracle Press Releases - Tue, 2019-10-08 09:00
Blog
Billboards to Dashboards: How OUTFRONT Media is Gaining Insights into Market Trends

By Peter Schutt, Senior Director, Oracle—Oct 8, 2019

OUTFRONT Media, which manages more than 500,000 outdoor advertising canvases, including the New York City MTA and the Bay Area’s BART, is getting smarter and faster in showing its customers the advantages of outdoor ads.

The longtime Oracle Database Cloud customer this year upgraded to Oracle’s Autonomous Data Warehouse, gaining more-robust data-crunching with machine learning capabilities for faster time to market, enhanced performance and scalability, and a more flexible consumption-based cost model. In combination with Oracle Analytics, OUTFRONT’s Technology Services organization is collaborating with business lines to quickly create valuable reports and dashboards and make it easier to analyze revenue trends and identify opportunities within advertisers’ spend profiles.

For example, OUTFRONT is empowering hundreds of its sales professionals and executives with data visualization and analytics dashboards that incorporate third-party media spend data to quickly create a comprehensive view of a customer’s total advertising spend across all markets and media—outdoor, internet, TV, and radio—and make recommendations on how advertisers can more strategically utilize out-of-home in its media mix. Now, with Autonomous Data Warehouse, a powerful database is provisioned in minutes versus months, and terabytes of third-party data are loaded in minutes and securely published in interactive dashboards to the salesforce.

“The strategic insights that we gain from implementing Oracle Autonomous Data Warehouse can help our business tremendously. We can easily examine media spend on behalf of our advertisers and show them how their investment would perform better by shifting spend to outdoor. It helps us achieve maximum results for our customers, which in turn grows our business,” said Derek Hayden, Vice President, Data Strategy and Analytics, OUTFRONT Media.

Watch the OUTFRONT Media Video

Watch this video to hear Derek Hayden, Vice President of Data Strategy and Analytics, share how OUTFRONT Media is innovating sales with Oracle Autonomous Data Warehouse.


Read More Oracle Cloud Customer Stories

OUTFRONT Media is one of thousands of customers around the world on their journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

Oracle Cloud Infrastructure Momentum Accelerates with New Hires

Oracle Press Releases - Tue, 2019-10-08 07:00
Press Release
Oracle Cloud Infrastructure Momentum Accelerates with New Hires
Nearly 2,000 new employees will support customer growth, product innovation, and data center expansion

Redwood Shores, Calif.—Oct 8, 2019

Oracle today announced plans to hire nearly 2,000 employees worldwide to work on its growing Oracle Cloud Infrastructure business. The new roles, which include software development, cloud operations, and business operations, will support Oracle’s rapidly expanding infrastructure customer base, and come as the company rolls out new product innovations and rapidly opens cloud regions around the globe.

“Cloud is still in its early days with less than 20 percent penetration today, and enterprises are just beginning to use cloud for mission-critical workloads,” said Don Johnson, executive vice president, Oracle Cloud Infrastructure. “Our aggressive hiring and growth plans are mapped to meet the needs of our customers, providing them reliability, high performance, and robust security as they continue to move to the cloud.”

Oracle Cloud Infrastructure’s portfolio has experienced significant growth. Recent product innovations include new automated cloud security services, the launch of Autonomous Linux, and a host of new cloud data services. Only Oracle Gen 2 Cloud is built to run Oracle’s leading suite of enterprise cloud applications and uses machine learning to deliver category-defining autonomous services, including Oracle Autonomous Database and Oracle Autonomous Linux. Additionally, Oracle is the only cloud infrastructure company in the world that delivers enterprise applications. This gives customers huge cost and competitive advantages and enables them to extend their applications as they grow. 

In the past year, Oracle has opened 12 new Gen 2 Cloud regions and currently operates 16 regions globally, the fastest expansion by any major cloud provider. Continuing its rapid cadence of Oracle Gen 2 Cloud region launches, Oracle plans to add 20 more regions by the end of 2020, bringing the global footprint to 36 total regions. Eleven countries or jurisdictions will have region pairs that facilitate enterprise-class, multi-region, disaster-recovery strategies to better support those customers who want to store their data in-country or in-region. 

Today, Oracle is the only company delivering a complete and integrated set of cloud services and building intelligence into every layer of the cloud. Oracle Cloud Infrastructure’s growing talent base will ensure customers continue to benefit from best-in-class security, consistent high performance, simple predictable pricing, and the tools and expertise needed to bring enterprise workloads to cloud quickly and efficiently.

In addition to rapid hiring, Oracle will make additional real estate investments to support the expanded Oracle Cloud Infrastructure workforce.

Contact Info
Jessica Moore
Oracle
+1.650.506.3297
jessica.moore@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Forward-Looking Statements Disclaimer

Statements in this article relating to Oracle’s future plans, expectations, beliefs, and intentions are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect Oracle’s current expectations and actual results, and could cause actual results to differ materially. A discussion of such factors and other risks that affect Oracle’s business is contained in Oracle’s Securities and Exchange Commission (SEC) filings, including Oracle’s most recent reports on Form 10-K and Form 10-Q under the heading “Risk Factors.” 

These filings are available on the SEC’s website or on Oracle’s website at http://www.oracle.com/investor. All information in this article is current as of October 8, 2019 and Oracle undertakes no duty to update any statement in light of new information or future events.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jessica Moore

  • +1.650.506.3297

Nicole Maloney

  • +1.650.506.0806

Oracle Named a Leader in the 2019 Gartner Magic Quadrant for Data Integration Tools for the 11th Consecutive Year

Oracle Press Releases - Tue, 2019-10-08 07:00
Press Release
Oracle Named a Leader in the 2019 Gartner Magic Quadrant for Data Integration Tools for the 11th Consecutive Year
Oracle named in Leaders quadrant based on its ability to execute and for completeness of vision

Redwood Shores, Calif.—Oct 8, 2019

Oracle has been named a Leader in Gartner’s 2019 Magic Quadrant for Data Integration Tools report for the 11th consecutive year. This year’s report states, “The data integration tool market is resurging as new requirements for hybrid/intercloud integration, active metadata and augmented data management force a rethink of existing practices.”

“We believe being recognized as a Leader in the Data Integration Tools category for more than a decade highlights Oracle’s ongoing commitment to innovation around the industry’s most challenging data issues,” said Jeff Pollock, vice president product management, Oracle. “With more enterprises moving to cloud or hybrid-cloud environments, it’s important that we continue to invest in our open platform. Not only do we help customers pull from hundreds of Oracle and non-Oracle sources based on their unique environments, but we help deliver value quickly by simplifying data tasks and providing intuitive self-service for IT and business users.”

Gartner estimates that “By 2021, more than 80% of organizations will use more than one data delivery style to execute their data integration use cases.” Oracle’s data integration offering, which includes Oracle GoldenGate, Oracle Data Integrator and Oracle Enterprise Data Quality, delivers a proven and comprehensive solution to simplify enterprise data integration.

Oracle data integration allows enterprises to access and manipulate hundreds of data sources, whether on premises or in the cloud, and accept any data in any shape or format. This solution offers exciting opportunities to accelerate business transformation across a broad spectrum of enterprise customers and partners. We accomplish this by incorporating machine learning and artificial intelligence-powered features to help service all data integration needs.

Oracle also supports its customers by offering a vast network of technical consultants and service providers across its global partner network to aid in the implementation and management of their data integration technologies.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant for Data Integration Tools here.

*Gartner “Magic Quadrant for Data Integration Tools” by Ehtisham Zaidi, Eric Thoo, Nick Heudecker. August 1 2019.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Travis Anderson
Oracle
208-880-8134
travis.j.anderson@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Travis Anderson

  • 208-880-8134

Be Careful When Subscribing To Oracle Learning Subscription

Michael Dinh - Mon, 2019-10-07 18:53

Subscribing to Oracle Learning Subscription seems good in theory but bad in reality.

Oracle support informed me: “Oracle University’s policy regarding Learning Subscription courseware materials is that they cannot be downloaded by customers.”

How convenient of Oracle, since this information should have been stated at https://education.oracle.com/oracle-learning-subscriptions

I took it for granted that the materials could be downloaded, since they are made available for download in all other training formats.

Not disclosing this information makes the process feel deceptive, because by the time one has subscribed and discovered the lack of full disclosure, it may be too late.

Hopefully, this will help others avoid the same mistake.

 

SQL Server 2019 Accelerated Database Recovery – Instantaneous rollback and aggressive log truncation

Yann Neuhaus - Mon, 2019-10-07 11:37

In my previous article about Accelerated Database Recovery (ADR), I wrote mostly about the new Persisted Version Store (PVS), how important it is in the new SQL Server database engine recovery process, and the potential impact it may have on the application workload. This time let’s talk a little bit more about the ADR benefits we may get with instantaneous rollback and aggressive log truncation. These two capabilities address some DBA pain points, especially when rollback or crash recovery kicks in with open long-running transactions.

 

First, let’s set the context by running the following long-running transaction without ADR enabled:

BEGIN TRAN;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
ROLLBACK TRAN;

 

The above query generates 10GB of log records (roughly 90% of the total transaction log space size) as shown below:

SELECT 
	DB_NAME(database_id) AS database_name,
	total_log_size_in_bytes / 1024 / 1024 / 1024 AS total_GB,
	used_log_space_in_bytes / 1024 / 1024 / 1024 AS used_GB,
	used_log_space_in_percent
FROM sys.dm_db_log_space_usage

 

Before cancelling my previous query to trigger a rollback operation, let’s run the following concurrent update:

BEGIN TRAN;

DECLARE @begin_date DATETIME = GETDATE();

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1
Where TransactionID = 1;

DECLARE @end_date DATETIME = GETDATE();

SELECT DATEDIFF(SECOND, @begin_date, @end_date);

 

As expected, the second query is blocked during the rollback process of the first one because they compete for the same resource:

SELECT 
	spid,
	blocked,
	lastwaittype,
	waitresource,
	cmd,
	program_name
FROM sys.sysprocesses
WHERE spid IN (64, 52)

 

In my case, the second query was blocked for 135 seconds. Depending on your scenario, it could be more or less. I have experienced this annoying issue myself at some customer sites, and I’m pretty sure that’s the case for many SQL Server DBAs.
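
For reference, ADR is controlled at the database level. Here is a minimal sketch of how it can be switched on, assuming a test database named demo_adr (a hypothetical name) and, optionally, a dedicated filegroup for the Persisted Version Store:

-- Enable Accelerated Database Recovery on the test database
ALTER DATABASE [demo_adr] SET ACCELERATED_DATABASE_RECOVERY = ON;
GO

-- Optionally, the PVS can be placed on a dedicated filegroup, e.g.:
-- ALTER DATABASE [demo_adr] SET ACCELERATED_DATABASE_RECOVERY = ON
--     (PERSISTENT_VERSION_STORE_FILEGROUP = [PVS_FG]);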

Let’s now perform the same test after enabling ADR. Executing the query below (used in my first test) gave interesting results.

BEGIN TRAN;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

First, rolling back the transaction was pretty much instantaneous and the concurrent query executed faster, without being blocked by the ROLLBACK process. This is where the logical revert comes into play. As stated in the Microsoft documentation, when a rollback is triggered all locks are released immediately. Unlike the usual recovery process, ADR uses the additional PVS to cancel operations for identified aborted transactions by restoring the latest committed version of the affected rows. The sys.dm_tran_aborted_transactions DMV provides a picture of aborted transactions:

SELECT *
FROM sys.dm_tran_aborted_transactions;
GO

 

Out of curiosity, I tried to dig further into the transaction log file to compare rollback operations between the usual recovery process and the ADR-based recovery process. I used a simpler scenario: a simple dbo.test_adr table with a single id column, into which I insert two rows and then update them. To get the log record data, the sys.fn_dblog function is your friend.

CREATE TABLE dbo.test_adr (
	id INT
);

CHECKPOINT;

BEGIN TRAN;

INSERT INTO dbo.test_adr ( id ) VALUES (1), (2);

UPDATE dbo.test_adr SET id = id + 1;

ROLLBACK TRAN;

SELECT 
	[Current LSN]
	,Operation
	,Context
	,[Transaction ID]
	,[Lock Information]
	,[Description]
       ,[AllocUnitName]
FROM sys.fn_dblog(NULL, NULL);

 

Without ADR, we see the usual log records for rollback operations, including compensation records and the transaction’s end-of-rollback mark. In this case, remember that the locks are released only at the end of the rollback operation (LOP_ABORT_XACT).

With ADR, the story is a little bit different:

Let me be clear that this is only speculation on my part; I have just tried to correlate information from the Microsoft documentation, so don’t take my word for it. When a transaction is rolled back, it is marked as aborted and tracked for the logical revert operation. The good news is that the locks are released immediately afterwards. My guess is that the LOP_FORGET_XACT record corresponds to the moment when the transaction is marked as aborted, and from that moment on no blocking issues related to the ROLLBACK can occur. At the same time, the logical revert is an asynchronous process that provides instantaneous transaction rollback and undo for all versioned operations by using the PVS.

 

Second, returning to the first test scenario …

BEGIN TRAN;
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

… I noticed the transaction log space was used differently, and less of it was consumed, compared to my first test without ADR enabled. I performed the same test several times and got results in the same order of magnitude.

I got some clues by adding some perfmon counters:

  • SQL Server databases:PVS in-row diff generated/sec
  • SQL Server databases:PVS off-row records generated/sec
  • SQL Server databases:Percent Log Used
  • SQL Server Buffer Manager:Checkpoints/sec

My two update operations used different strategies for storing row versions. In the first update the row versions fit in the data page, whereas in the second one SQL Server had to go through off-row storage to store the additional versions. In addition, we also see interesting behavior from the ADR sLog component, with aggressive log truncation at each of the checkpoint operations. Due to the changes in logging, only certain operations require log space, and because the sLog is processed on every checkpoint operation, aggressive log truncation becomes possible.

In my case, it kept the space used by my long-running transaction under control, even while the transaction remained open.
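
If you want to keep an eye on how much space the PVS itself consumes while such a transaction is open, the persistent version store DMV can be queried. A minimal sketch (column names as I understand them from the SQL Server 2019 documentation, so verify them on your build):

SELECT 
	DB_NAME(database_id) AS database_name,
	persistent_version_store_size_kb / 1024 AS pvs_size_mb,	-- current PVS footprint
	current_aborted_transaction_count				-- transactions still waiting for logical revert
FROM sys.dm_tran_persistent_version_store_stats;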

In this blog post we’ve seen how ADR may address database unavailability pain points through instantaneous rollback and aggressive log truncation. It’s good to know that we can benefit from such features in SQL Server 2019!

See you!

 

 

Cet article SQL Server 2019 Accelerated Database Recovery – Instantaneous rollback and aggressive log truncation est apparu en premier sur Blog dbi services.

Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions for Oracle EPM Cloud

Oracle Press Releases - Mon, 2019-10-07 07:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions for Oracle EPM Cloud
Oracle recognized for completeness of vision and highest for ability to execute

Redwood Shores, Calif.—Oct 7, 2019

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Financial Planning and Analysis Solutions” report for the third consecutive year. Out of 15 companies evaluated, Oracle is positioned as a Leader based on its ability to execute and completeness of vision. A complimentary copy of the report is available here.

According to the report, “Leaders provide mature offerings that meet market demand and have demonstrated the vision necessary to sustain their market position as requirements evolve. Leaders have also demonstrated execution success by having revenue commensurate with other high-performing leaders in this market study, as well as by having a larger number of customers. The hallmark of Leaders is that they focus on and invest in their offerings to the point in which they lead the market and can affect its overall direction. As a result, Leaders can be vendors to watch as you try to understand how new market offerings might evolve. Leaders typically possess a large, satisfied customer base (relative to the size of the market) and enjoy high visibility within the market. Their size and financial strength enable them to remain viable in a challenging economy. Leaders typically respond to a wide market audience by supporting broad market requirements. However, they may fail to meet the specific needs of vertical markets or other, more specialized segments.”

“To help organizations of all sizes navigate a complex and dynamic global business environment, Oracle remains committed to providing the innovation and advanced technologies that can help our EPM Cloud customers achieve superior business value and competitive advantage,” said Hari Sankar, Group Vice President, Product Management, Oracle. “We are thrilled to be acknowledged once again as a Leader by Gartner. We believe this report further validates our product strengths, investment focus and customer successes.”

Oracle EPM Cloud is the only complete and connected EPM solution on a common platform that addresses financial and operational planning, consolidation and close, data management, reporting, and analysis processes. With native integration with the broader Oracle Cloud Applications suite, which includes Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Human Capital Management (HCM) and Customer Experience (CX) SaaS applications, Oracle helps customers to stay ahead of changing expectations, build adaptable organizations, and realize the potential of the latest innovations.

Gartner, Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Robert Anderson, John Van Decker, Greg Leiter, 8 August 2019
 

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • +1.650.506.1891

Resumable

Jonathan Lewis - Mon, 2019-10-07 04:31

There are two questions about temporary space that appear fairly regularly on the various Oracle forums. One is of the form:

From time to time my temporary tablespace grows enormously (and has to be shrunk), how do I find what’s making this happen?

The other follows the more basic pattern:

My process sometimes crashes with Oracle error: “ORA-01652: unable to extend temp segment by %n in tablespace %s” how do I stop this happening?

Before moving on to the topic of the blog, it’s worth pointing out two things about the second question:

  • First, it’s too easy to get stuck at the word temp and leap to the conclusion that the problem is about the temporary tablespace without noticing that the error message includes the specific tablespace that’s raised the problem. If, for example, you rebuild an index in a nominated tablespace Oracle first creates the index as a temporary segment (with a name like {starting_file_number}.{starting_block_number}) in that tablespace then renames it to match the original index name once the rebuild is complete and drops the old index.
  • Secondly a process that raises ORA-01652 isn’t necessarily the guilty party – it may be the victim of some other process hogging all the available space when it shouldn’t. Moreover that other process may have completed and released its space by the time you start looking for the problem – causing extra confusion because your process seems to have crashed without a cause. Taking my example of an index rebuild – your index rebuild may fail because someone else was rebuilding a different index at the same time in the same tablespace; but when you check the tablespace all the space from their original index is now free as their rebuild completed in the interim.

So, before you start chasing something that you think is a problem with your code, pause a moment to double-check the error message and think about whether you could have been the victim of some concurrent, but now complete, activity.

I’ve listed the two questions as variants on the same theme because the workaround to one of them introduces the risk of the other – if you want to avoid ORA-01652 you could make all your data files and temp files “autoextensible”, but then there may be occasions when they extend far too much and you need to shrink them down again (and that’s not necessarily easy if it’s not the temporary tablespace). Conversely, if you think your data or temp files randomly explode to ludicrous sizes you could decide on a maximum size for your files and disable autoextension – then handle the complaints when a user reports an ORA-01652.

There are various ways you could monitor your system in near real time to spot the threat as it builds, of course; and there are various ways to identify potentially guilty SQL after the event. You could keep an eye on various v$ dynamic performance views or dba_ administrative views to try and intercept a problem; you could set event 1652 to dump an errorstack (or even systemstate) for post-crash analysis to see what that reported. Neither is an ideal solution – one requires you to pay excessive attention to the system, the other is designed to let the problem happen then leave you to clean up afterwards.  There is, however, a strategy that may stop the problem from appearing without requiring constant monitoring. The strategy is to enable (selectively) resumable operations.

If a resumable operation needs to allocate space but is unable to do so – i.e. it would normally be about to raise ORA-01652 – it will suspend itself for a while, going into the wait state “statement suspended, wait error to be cleared” which will show up as the event in v$session_wait, timing out every 2 seconds. The session will also be reporting its current action in the view v$resumable or, for slightly more information, dba_resumable. As it suspends, the session will also write a message to the alert log, but you can also create an “after suspend” database trigger to alert you that a problem has occurred.

If you set the resumable timeout to a suitable value then you may find:

  • the problem goes away of its own accord and the session resumes before the timeout is reached

or

  • you receive a warning and have some time to identify the source of the problem and take the minimum action needed to allow the session to resume
Implementation

The parameter resumable_timeout is a general control for resumable sessions if you don’t handle the feature at a more granular level than the system.

By default this parameter is set to zero which translates into a default value of 7,200 seconds but that default doesn’t come into effect unless a session declares itself resumable. If you set the parameter to a non-zero value all sessions will automatically be operating as resumable sessions – and you’ll soon hear why you don’t want to do that.

The second enabling feature for resumable sessions is the resumable privilege – a session can’t control its own resumability unless the schema has been granted the resumable privilege – which may be granted through a role. If a session has the privilege it may set its own resumable_timeout, even if the system value is zero.
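
For reference, both enabling steps are single statements. A minimal sketch, assuming an spfile-based instance and a user called test_user (the schema used in the examples below):

alter system set resumable_timeout = 10 scope = both;	-- instance-wide default, in seconds
grant resumable to test_user;				-- allows the session-level override shown later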

Assume we have set resumable_timeout to 10 (seconds) through the instance parameter file and restarted the instance. If we now issue (for example) the following ‘create table’ statement:


create table t1 (n1, v1 ) 
pctfree 90 pctused 10
tablespace tiny
as
select 
        rownum, cast(lpad('x',800) as varchar2(1000))
from    all_objects
where   rownum <= 20000
/

This will attempt to allocate 1 row per block for 20,000 blocks (plus about 1.5% for bitmap space management blocks) – and tablespace tiny lives up (or down) to its name, consisting of a single file of only 10,000 Oracle blocks. Shortly after starting, the session will hit Oracle error “ORA-01652: unable to extend temp segment by 128 in tablespace TINY”, but it won’t report it; instead it will suspend itself for 10 seconds before failing and reporting the error. This will happen whether or not the session has the resumable privilege – in this case the behaviour is dictated by our setting the system parameter. If you look in the alert log after the session finally errors out you will find text like the following:

2019-10-04T14:01:11.847943+01:00
ORCL(3):ORA-1652: unable to extend temp segment by 128 in tablespace TINY [ORCL] 
ORCL(3):statement in resumable session 'User TEST_USER(138), Session 373, Instance 1' was suspended due to
ORCL(3):    ORA-01652: unable to extend temp segment by 128 in tablespace TINY
2019-10-04T14:01:23.957586+01:00
ORCL(3):statement in resumable session 'User TEST_USER(138), Session 373, Instance 1' was timed out

Note that there’s a 10 (plus a couple) second gap between the point where the session reports that it is suspending itself and the point where it fails with a timeout. The two extra seconds appear because the session polls every 2 seconds to see whether the problem is still present or whether it has spontaneously disappeared so allowing the session to resume.

Let’s change the game slightly; let’s try to create the table again, but this time execute the following statement first:

alter session enable resumable timeout 60 name 'Help I''m stuck';

The initial response to this will be Oracle error “ORA-01031: insufficient privileges” because the session doesn’t have the resumable privilege, but after granting resumable to the user (or a relevant role) we try again and find we will be allowed a little extra time before the CTAS times out. Our session now overrides the system timeout and will wait 60 seconds (plus a bit) before failing. The “timeout” clause is optional and if we omit it the session will use the system value; similarly the “name” clause is optional, and though there’s no default for it, it’s just a message that will get into various views and reports.

There are several things you might check in this 60 second grace period. The session wait history will confirm that your session has been timing out every two seconds (as will the active session history if you’re licensed to use it):


select seq#, event, wait_time from v$session_wait_history where sid = 373

      SEQ# EVENT							     WAIT_TIME
---------- ---------------------------------------------------------------- ----------
	 1 statement suspended, wait error to be cleared			   204
	 2 statement suspended, wait error to be cleared			   201
	 3 statement suspended, wait error to be cleared			   201
	 4 statement suspended, wait error to be cleared			   201
	 5 statement suspended, wait error to be cleared			   200
	 6 statement suspended, wait error to be cleared			   200
	 7 statement suspended, wait error to be cleared			   202
	 8 statement suspended, wait error to be cleared			   200
	 9 statement suspended, wait error to be cleared			   200
	10 statement suspended, wait error to be cleared			   200

Then there’s a special dynamic performance view, v$resumable which I’ve reported below using a print_table() procedure that Tom Kyte wrote many, many years ago to report rows in a column format:

SQL> set serveroutput on
SQL> execute print_table('select * from v$resumable where sid = 373')

ADDR                          : 0000000074515B10
SID                           : 373
ENABLED                       : YES
STATUS                        : SUSPENDED
TIMEOUT                       : 60
SUSPEND_TIME                  : 10/04/19 14:26:20
RESUME_TIME                   :
NAME                          : Help I'm stuck
ERROR_NUMBER                  : 1652
ERROR_PARAMETER1              : 128
ERROR_PARAMETER2              : TINY
ERROR_PARAMETER3              :
ERROR_PARAMETER4              :
ERROR_PARAMETER5              :
ERROR_MSG                     : ORA-01652: unable to extend temp segment by 128 in tablespace TINY
CON_ID                        : 0
-----------------
1 rows selected

Notice how the name column reports the name I supplied when I enabled the resumable session. The view also tells us when the critical statement was suspended and how long it is prepared to wait (in total) – leaving us to work out from the current time how much time we have left to work around the problem.

There’s also a dba_resumable variant of the view which is slightly more informative (though the sample below is not consistent with the one above because I ran the CTAS several times, editing the blog as I did so):

SQL> execute print_table('select * from dba_resumable where session_id = 373')

USER_ID                       : 138
SESSION_ID                    : 373
INSTANCE_ID                   : 1
COORD_INSTANCE_ID             :
COORD_SESSION_ID              :
STATUS                        : SUSPENDED
TIMEOUT                       : 60
START_TIME                    : 10/04/19 14:21:14
SUSPEND_TIME                  : 10/04/19 14:21:16
RESUME_TIME                   :
NAME                          : Help I'm stuck
SQL_TEXT                      : create table t1 (n1, v1 ) pctfree 90 pctused 10 tablespace tiny as  select rownum, 
                                cast(lpad('x',800) as varchar2(1000)) from all_objects where rownum <= 20000
ERROR_NUMBER                  : 1652
ERROR_PARAMETER1              : 128
ERROR_PARAMETER2              : TINY
ERROR_PARAMETER3              :
ERROR_PARAMETER4              :
ERROR_PARAMETER5              :
ERROR_MSG                     : ORA-01652: unable to extend temp segment by 128 in tablespace TINY
-----------------
1 rows selected

This view includes the text of the statement that has been suspended and shows us when it started running (so that we can decide whether we really want to rescue it, or might be happy to kill it to allow some other suspended session to resume).

If you look at the alert log in this case you’ll see that the name has been reported there instead of the user, session and instance – which means you might want to think carefully about how you use the name option:


2019-10-04T14:21:16.151839+01:00
ORCL(3):statement in resumable session 'Help I'm stuck' was suspended due to
ORCL(3):    ORA-01652: unable to extend temp segment by 128 in tablespace TINY
2019-10-04T14:22:18.655808+01:00
ORCL(3):statement in resumable session 'Help I'm stuck' was timed out

Once your resumable task has completed (or timed out and failed) you can stop the session from being resumable with the command:

alter session disable resumable;

And it’s important that every time you enable resumability you disable it as soon as the capability is no longer needed. Also, be careful about when you enable it; don’t be tempted to make every session resumable. Use it only for really important cases. Once a session is resumable virtually everything that goes on in that session is deemed to be resumable, and this has side effects.

The first side effect that may spring to mind is the impact of the view v$resumable – it’s a memory structure in the SGA so that everyone can see it and all the resumable sessions can populate and update it. That means there’s got to be some latch (or mutex) protection going on – and if you look at v$latch you’ll discover that there’s just a single (child) latch doing the job, so resumability can introduce a point of contention. Here’s a simple script (using my “start_XXX” strategy) to “select 1 from dual;” one thousand times, with calls to check the latch activity:

set termout off
set serveroutput off
execute snap_latch.start_snap

@start_1000

set termout on
set serveroutput on
execute snap_latch.end_snap(750)

And here are the results of running the script – reporting only the latches with more than 750 gets in the interval – first without and then with a resumable session:

---------------------------------
Latch waits:-   04-Oct 15:04:31
Lower limit:-  750
---------------------------------
Latch                              Gets      Misses     Sp_Get     Sleeps     Im_Gets   Im_Miss Holding Woken Time ms
-----                              ----      ------     ------     ------     -------   ------- ------- ----- -------
session idle bit                  6,011           0          0          0           0         0       0     0      .0
enqueue hash chains               2,453           0          0          0           0         0       0     0      .0
enqueue freelist latch                1           0          0          0       2,420         0       0     0      .0
JS queue state obj latch          1,176           0          0          0           0         0       0     0      .0

SQL> alter session enable resumable;

SQL> @test
---------------------------------
Latch waits:-   04-Oct 15:04:46
Lower limit:-  750
---------------------------------
Latch                              Gets      Misses     Sp_Get     Sleeps     Im_Gets   Im_Miss Holding Woken Time ms
-----                              ----      ------     ------     ------     -------   ------- ------- ----- -------
session idle bit                  6,011           0          0          0           0         0       0     0      .0
enqueue hash chains               2,623           0          0          0           0         0       0     0      .0
enqueue freelist latch                1           0          0          0       2,588         0       0     0      .0
resumable state object            3,005           0          0          0           0         0       0     0      .0
JS queue state obj latch          1,260           0          0          0           0         0       0     0      .0

PL/SQL procedure successfully completed.

SQL> alter session disable resumable;

That’s 1,000 selects from dual – 3,000 latch gets on a single child latch. It looks like every call to the database results in a latch get and an update to the memory structure. (Note: You wouldn’t see the same effect if you ran a loop inside an anonymous PL/SQL block since the block would be the single database call).

For other side effects with resumability think about what else is going on around your session. If you allow a session to suspend for (say) 3600 seconds and it manages to resume just in time to avoid a timeout it now has 3,600 seconds of database changes to unwind if it’s trying to produce a read-consistent result; so not only do you have to allow for increasing the size of the undo tablespace and increasing the undo retention time, you have to allow for the fact that when the process resumes it may run much more slowly than usual because it spends more of its time trying to see the data as it was before it suspended, which may require far more single block reads of the undo tablespace – and the session may then crash anyway with an Oracle error ORA-01555 (which is so well-known that I won’t quote the text).

In the same vein – if a process acquires a huge amount of space in the temporary tablespace (in particular) and fails instantly because it can’t get any more space it normally crashes and releases the space. If you allow that process to suspend for an hour it’s going to hold onto that space – which means other processes that used to run safely may now crash because they find there’s no free space left for them in the temporary tablespace.

Be very cautious when you introduce resumable sessions – you need to understand the global impact, not just the potential benefit to your session.

Getting Alerts

Apart from the (passive) views telling you that a session has suspended it’s also possible to get some form of (active) alert when the event happens. There’s an “after suspend” event that you can use to create a database trigger to take some defensive action, e.g.:

create or replace trigger call_for_help
after suspend
on test_user.schema
begin
        if sysdate between trunc(sysdate) and trunc(sysdate) + 3/24 then
                null;
                -- use utl_mail, utl_smtp et. al. to page the DBA
        end if;
end;
/

This trigger is restricted to the test_user schema, and (code not included) sends a message to the DBA’s pager only between the hours of midnight and 3:00 a.m. Apart from the usual functions in dbms_standard that return error codes, names of objects and so on, you might want to take a look at the dbms_resumable package for the “helper” functions and procedures it supplies.
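
As a sketch only, the trigger above could be extended with dbms_resumable.space_error_info() to capture the details of the space problem; the out-parameter list is as I understand it from the package documentation, so check it against your version, and call_for_help_v2 is just a hypothetical name:

create or replace trigger call_for_help_v2
after suspend
on test_user.schema
declare
        l_error_type   varchar2(64);
        l_object_type  varchar2(64);
        l_owner        varchar2(128);
        l_tablespace   varchar2(128);
        l_object_name  varchar2(128);
        l_sub_object   varchar2(128);
begin
        -- returns true if the suspension was caused by a space-related error
        if dbms_resumable.space_error_info(
                l_error_type, l_object_type, l_owner,
                l_tablespace, l_object_name, l_sub_object
        ) then
                null;
                -- e.g. page the DBA with l_tablespace / l_object_name,
                -- or call dbms_resumable.abort(<sid>) to give up immediately
        end if;
end;
/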

For further information on resumable sessions here’s a link to the 12.2 manual to get you started.

Video : Oracle REST Data Services (ORDS) : AutoREST

Tim Hall - Mon, 2019-10-07 02:05

Today’s video is a demonstration of the AutoREST feature of Oracle REST Data Services (ORDS).

This is based on the following article.

I also have a bunch of other articles here.

The star of today’s video is Connor McDonald of “600 slides in 45 minutes” fame, and more recently AskTom.

Cheers

Tim…


Free Oracle Cloud: 12. Create a 2nd Compute Instance and a Load Balancer

Dimitri Gielis - Fri, 2019-10-04 11:51
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In my blog post Create a VM Instance (Compute Cloud) we created a VM instance in the Free Oracle Cloud. The cool thing is that you get two VMs for free. In this post, we will set up the other always free compute instance.

Just like when we created our first instance, hit the Create a VM instance button:


Give your instance a name. Previously I just hit the Create button, BUT this time you want to open the Show Shape, Network and Storage Options section first:


The most important part of that screen is the "Assign public IP address" section. If you don't need this compute instance to be accessible from the internet you can ignore it, but if you want to host a website, for example, you might want to check it. If you didn't do it, you can always add a public IP later, but I personally found that cumbersome, and the networking piece hard to understand: I had to go through many different steps to get an internet connection to that machine, whereas when you assign a public IP address, Oracle does everything for you... Anyway, what you need depends on your use case, but I do want to highlight it. Also, it seems that the default changed from when I wrote the first post; by default, you don't get a public IP address. It might be that Oracle is trying to push you to use a Load Balancer (see later in this blog post), and that might actually make sense.



Clicking the Create button will show that your instance is being provisioned.


When you go back to the overview you should see both of your Always Free Compute instances:


Clicking on the name, you will get the details. This screenshot shows when you don't specify a public IP address.


To access that machine, as it doesn't have a public IP, I connected to my first instance and from there, as I am on the subnet, I can connect to the Private IP Address:


An alternative for a URL to go directly to your VM instance is to front it with a Load Balancer.

Which brings us to the Load Balancer topic. With the Always Free Oracle Cloud, we also get a Load Balancer for free. There are different use cases for using a Load Balancer, but here are my own reasons why I have used a Load Balancer before:

  1. Distribute the traffic automatically over different machines. For example, when you use our APEX Office Print (AOP) Cloud you actually hit our load balancer; behind it we have two to five different machines. It's not only to handle the large number of prints we get, it also makes our lives easier when we want to upgrade without downtime: we upgrade a clone instance and, when done, the new machines are brought online and the old ones are shut down. We patch our own service with zero downtime.
  2. The Load Balancer has the SSL certificate and handles the HTTPS requests while the backend servers have HTTP.
  3. On a Load Balancer, you have integrated health checks, so you can be warned when things go wrong, even when there's only one server behind the Load Balancer.

So let's get started setting up a Load Balancer in the Oracle Cloud:

Click on Networking > Load Balancers:


Click the Create Load Balancer button:


It will ask for a name and type. For the Always free instance, use Micro with Maximum Total Bandwidth.
By default Small is selected, so don't forget to change it:


Next you want to add a Backend to this Load Balancer, so click the Add Backends button:


In the pop-up you can select the instances you want to put behind this Load Balancer:


Furthermore, on the screen you can select a Health Check Policy:


In the next step, you can upload the SSL certificate, in case you want the Load Balancer to be accessible through HTTPS. You can also choose to just configure the Load Balancer for HTTP (which I don't recommend):


Hit the Create Load Balancer and you will get an overview that the Load Balancer is being created:


Once it's ready the icon turns green and you will see the Public IP Address of your Load Balancer:


Instead of putting the IP Address of your instance directly in the DNS of your domain name, you put the IP Address of the Load Balancer in.

A Load Balancer can do much more, you can have different Rules, SSL tunneling, etc. You can read more about that in the online documentation.

Hopefully, now you know how to set up a second compute instance and you have an idea what a Load Balancer can do for you.

We are almost done with this series... but you definitely want to read the next blog post, which is the last one where I give some important information to keep your Always Free instance running.
Categories: Development

Free Oracle Cloud: 11. Sending Emails with APEX_MAIL on ATP

Dimitri Gielis - Fri, 2019-10-04 08:38
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In this post, we will configure the Oracle Cloud to allow our instances, databases and Oracle APEX to send out emails. In my blog post 5. Setup APEX in ATP and create first APEX app, I initially said you can't use APEX_MAIL in APEX in ATP, but I was wrong, so a few days after my post I updated it, to point you to the documentation with the configuration steps you have to do to make it work.

The reason I thought you couldn't use APEX_MAIL was that, during my tests, sending emails failed. I hadn't read the documentation ;) In this post, I will share how I got it to work after all.

The first thing you have to do is create an SMTP Credential for your user. You do that by logging into your Oracle Cloud account, going to Identity > Users and selecting your user:



Click the SMTP Credentials in the left menu:


Hit the Generate SMTP Credentials button, and give it a name:


On the next screen, you will see the USERNAME and PASSWORD. Take note of these, as you won't be able to see them again afterwards:


You will come back to the overview screen but, as far as I can see, there's no way to retrieve the password again, so if you lose it you need to set one up again:


Now we will let Oracle APEX know about those parameters. Log in as ADMIN, through SQL Developer for example (see this blog post on how to do that), and run the following statement:

BEGIN
  APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_HOST_ADDRESS', 'smtp.us-ashburn-1.oraclecloud.com');
  APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_USERNAME', 'ocid1.user.oc1..xxxxxxxxxxx@ocid1.tenancy.oc1..xxxxxxx');
  APEX_INSTANCE_ADMIN.SET_PARAMETER('SMTP_PASSWORD', 'Fxxxx');
  COMMIT;
END;
/


Here is a screenshot when running:



Log in to Oracle APEX, go to SQL Workshop and try to send out emails:
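
A minimal sketch of such a test (the addresses are placeholders, and the from-address must match an approved sender, which we will set up below):

BEGIN
  apex_mail.send(
    p_to   => 'someone@example.com',        -- placeholder recipient
    p_from => 'no-reply@yourdomain.com',    -- placeholder; must be an approved sender (see below)
    p_subj => 'Test mail from ATP',
    p_body => 'Hello from APEX_MAIL on the Free Oracle Cloud.');
  apex_mail.push_queue;                     -- push the queue so the mail goes out right away
END;
/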


It says statement processed, but when you query the APEX_MAIL_QUEUE, you will see it's still stuck with an error ORA-29278: SMTP transient error: 471 Authorization failed:


There's one more step you have to do: specify the email addresses you want to allow to send emails from. Go to Email Delivery in your Oracle Cloud account dashboard, click Email Approved Senders and hit the Create Approved Sender button:




Add the email address you want to send emails from and hit the Create Approved Sender button:


In the overview you will see all allowed email addresses:


When we try to send again and check the APEX_MAIL_LOG, we see the emails are actually sent:


That's it, you can now send emails out of your APEX apps :)

We are almost done with the series. In the next post we will create a second compute instance and set up a Load Balancer.
Categories: Development

opt_estimate catalogue

Jonathan Lewis - Fri, 2019-10-04 04:10

This is just a list of the notes I’ve written about the opt_estimate() hint.

  • opt_estimate – using the hint to affect index calculations: index_scan and index_filter
  • opt_estimate 2 – applying the hint to nested loop joins, options: nlj_index_scan and nlj_index_filter
  • opt_estimate 3 – a couple of little-known options for the hint, “group_by” and “having”.
  • opt_estimate 4 – applying the hint at the query block level: particularly useful for CTEs (“with subquery”) and non-mergeable views.
  • opt_estimate 5 – a story of failure: trying to use opt_estimate to push predicates into a union all view.

I have a couple more drafts on the topic awaiting completion, but if you know of any other articles that would be a good addition to the list feel free to reference them in the comments.

 

Urdu in AWS

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:24
Urdu is arguably one of the most beautiful and poetic languages on the planet. AWS Translate now supports Urdu along with 31 other languages, which is awesome.



AWS Translate is growing by leaps and bounds and has matured quite a lot over the last few months. There are now hundreds of translation pairs, and it's now available in all regions.


Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. You can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages.


Amazon Translate provides translation between a source language (the input language) and a target language (the output language). A source language-target language combination is known as a language pair.


As with other AWS products, there are no contracts or minimum commitments for using Amazon Translate.



Categories: DBA Blogs

How to do a quick health check of AWS RDS database

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:16
Just because the database is on AWS RDS doesn't mean that it won't run slowly or get stuck. So when your users complain about the slowness of your RDS database, do the following quick health check:
1- From the AWS console, in the RDS section, go to your database and then to the Logs and Events tab. In the logs, check the alert log for Oracle, the error log for SQL Server, the postgres log for PostgreSQL and the error log for MySQL. Check for any errors or warnings and proceed accordingly for that database engine.


2- If you don't see any errors or warnings, or if you want to check further anyway, then first check which database instance class you are using. For example, for one of my test Oracle databases, it is db.r4.4xlarge.


Go to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html and check specifications of this instance type.

For instance, for db.r4.4xlarge, it is:


Instance Class   vCPU   ECU   Memory (GiB)   VPC Only   EBS Optimized   Max. Bandwidth (Mbps)   Network Performance
db.r4.4xlarge    16     53    122            Yes        Yes             3,500                   Up to 10 Gbps


So this db.r4.4xlarge has a max bandwidth (throughput) of 437.5 MB/s  (3500 Mbps/8 = 437.5 MB/s). The throughput limit is separate for read and write, which means you’ll get 437.5 MB/s for read and 437.5 MB/s for write.


3- Now go to the Monitoring tab of this RDS instance in your console and check Read Throughput and Write Throughput to see if your instance is hitting the above threshold (in this case, 437.5 MB/s). If yes, then you know that I/O is the issue, and you may need to either tune the SQL statements responsible or increase the instance size.


4- Similarly, from the same monitoring tab check for CPU usage, free memory available and Free storage space to make sure no other threshold is being reached.
5- Also check the Disk Queue Depth. The Disk Queue Depth is the number of I/O requests waiting to be serviced. Time spent waiting in the queue is a component of latency and service time. Ideally a disk queue depth of 15 or less should be fine, but if you notice latency greater than 10 milliseconds accompanied by a high disk queue depth, that could cause performance issues.


6- Last but not least, reviewing and tuning your SQLs is the biggest optimization gain you can achieve whether your database is in RDS or not.
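
For example, for an Oracle RDS database, a quick look at the top SQL by elapsed time is a reasonable starting point. A minimal sketch against v$sqlstats (the other engines have their own equivalents, such as pg_stat_statements or Query Store):

-- Top 10 SQL statements by total elapsed time (Oracle example)
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,
              ROUND(cpu_time / 1e6, 1)     AS cpu_sec,
              buffer_gets,
              disk_reads
       FROM   v$sqlstats
       ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 10;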


Hope it helps.
Categories: DBA Blogs

Documentum – Large documents in xPlore indexing

Yann Neuhaus - Thu, 2019-10-03 14:30

Documentum uses xPlore for its full-text indexing/search processes. If you aren't very familiar with how xPlore works, you might want to know how it is possible to index large documents, or you might be confused about some documents not being indexed (and therefore not searchable). In this blog, I will try to explain how xPlore can be configured to index these big documents without causing too much trouble, because by default these documents are just not indexed, which might be an issue. Documents tend to get bigger and bigger, and therefore the default thresholds for xPlore indexing might be a little bit outdated…

In this blog, I will go through all the thresholds that can be configured on the different components and I will try to explain a little bit what it's all about. Before starting, I believe a very short (and definitely not exhaustive) introduction to the xPlore indexing process is required. As soon as you install an IndexAgent, it will trigger the creation of several things on the associated repository, including the registration of new events in the 'dmi_registry'. When working with documents, these events ('dm_save', 'dm_saveasnew', …) will generate new entries in the 'dmi_queue_item'. The IndexAgent will then access the 'dmi_queue_item' and retrieve the documents that need indexing (add/update/remove from index). From there a CPS is called to process the document (language identification, text extraction, tokenization, lemmatization, stemming, …). My point here is that there are two main sides to the indexing process: the IndexAgent and then the CPS. This is also true for the thresholds: you will need to configure them properly on both sides.

 

I. IndexAgent

 
On the IndexAgent side, there isn't much configuration strictly related to the size of documents since there is only one parameter, but it is arguably the most important one: it is the first barrier that will block your indexing if not configured properly.

In the file indexagent.xml (found under $JBOSS_HOME/server/DctmServer_Indexagent/deployments/IndexAgent.war/WEB-INF/classes), in the exporter section, you can find the parameter 'contentSizeLimit'. This parameter controls the maximum size of a document that can be sent to indexing. This is the real size of the document ('content_size'/'full_content_size'); it is not the size of its text once extracted. The reason is simple: this limit is enforced on the IndexAgent side, before the text has been extracted, so the IndexAgent cannot know how big the extracted text will be. If the size of a document exceeds the value defined for 'contentSizeLimit', the IndexAgent will not even try to process it; it will simply reject it, and you will see a message stating that the document exceeded the limit both in the IndexAgent logs and on the 'dmi_queue_item' object. Other documents of the same batch aren't impacted: 'contentSizeLimit' applies to each document individually. The default value for this parameter is 20 000 000 bytes (19.07 MB).

If you are going to change this value, you will probably need some other updates as well. Several other parameters can be tweaked if you see issues while indexing large documents, and all of them can be configured inside this same indexagent.xml file. For example, you might want to look at 'content_clean_interval' (in milliseconds), which controls when the export of the document (the dftxml file) is removed from the staging area of the IndexAgent (the location defined by 'local_content_area'). If the value is too small, the CPS might try to retrieve a file for indexing after the IndexAgent has already removed it. The default value for this parameter is 1 200 000 (20 minutes).
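If you want to quickly verify what these parameters are currently set to, a small sketch like the one below can help. It deliberately does not assume a specific layout for indexagent.xml (element names and nesting can differ between versions), so treat the matching logic as a best-effort assumption and adjust the path to your own installation.

import os
import xml.etree.ElementTree as ET

# Path as described in this blog; $JBOSS_HOME is expanded from the environment.
INDEXAGENT_XML = os.path.expandvars(
    "$JBOSS_HOME/server/DctmServer_Indexagent/deployments/"
    "IndexAgent.war/WEB-INF/classes/indexagent.xml"
)

# Parameters discussed in this section.
WATCHED = {"contentSizeLimit", "content_clean_interval"}

tree = ET.parse(INDEXAGENT_XML)
for elem in tree.iter():
    # Case 1: the parameter is a dedicated element, e.g. <contentSizeLimit>20000000</contentSizeLimit>
    if elem.tag in WATCHED and (elem.text or "").strip():
        print(f"{elem.tag} = {elem.text.strip()}")
    # Case 2: the parameter is expressed through name/value attributes
    elif elem.get("name") in WATCHED and elem.get("value") is not None:
        print(f"{elem.get('name')} = {elem.get('value')}")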

 

II. CPS

 
On the CPS side, you can look at several other size-related parameters. You can find these parameters (and many others) in two main locations. The first is global to the Federation: indexserverconfig.xml (found under $XPLORE_HOME/config by default, but you can change it, e.g. to a shared location for a Multi-Node FT). The second one is a CPS-specific configuration file: PrimaryDsearch_local_configuration.xml for a PrimaryDsearch, or <CPS_Name>_configuration.xml for a CPS Only (found under $XPLORE_HOME/dsearch/cps/cps_daemon/).

The first parameter to look for is 'max_text_threshold'. This parameter controls the maximum text size of a document, i.e. the size of its text after extraction, not the real size of the document. If the text size of a document exceeds the value defined for 'max_text_threshold', the CPS acts according to the value of 'cut_off_text'. With 'cut_off_text' set to true, documents exceeding 'max_text_threshold' have only the first 'max_text_threshold' bytes of their text indexed; the CPS stops once it reaches the limit. In this case the CPS log contains something like 'doc**** is partially processed' and the dftxml of the document contains the mention 'partialIndexed'. This means the CPS stopped at the defined limit and the index might therefore be missing some content. With 'cut_off_text' set to false (the default), documents exceeding 'max_text_threshold' are rejected and not full-text indexed at all (only their metadata is indexed). Other documents of the same batch aren't impacted: 'max_text_threshold' applies to each document individually. The default value for this parameter is 10 485 760 bytes (10 MB) and the maximum possible value is 2 147 483 648 bytes (2 GB).

The second parameter to look for is 'max_data_per_process'. This parameter controls the maximum text size that a CPS batch should handle. The CPS indexes documents/items in batches ('CPS-requests-batch-size'). By default, a CPS processes up to 5 documents per batch but, if I'm not mistaken, it can be fewer if there aren't enough documents to process. If the total text size to be processed by the CPS for the complete batch is above 'max_data_per_process', the CPS rejects the full batch and therefore none of its documents get their content full-text indexed. This becomes an issue if you increase the previous parameters but miss/forget this one: you might end up with very small documents not indexed simply because they were in a batch containing some big documents. To be sure this parameter never blocks a batch, you can set it to 'CPS-requests-batch-size' * 'max_text_threshold'. The default value for this parameter is 31 457 280 bytes (30 MB) and the maximum possible value is 2 147 483 648 bytes (2 GB).
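Here is a minimal sketch of that consistency check; the 40 MB per-document target and the batch size of 5 are just the example figures used in this blog, not values coming from the xPlore documentation.

# Example figures only -- adjust to the values you actually plan to configure.
cps_requests_batch_size = 5              # default number of documents per CPS batch
max_text_threshold      = 41_943_040     # 40 MB of extracted text per document
max_data_per_process    = 209_715_200    # proposed batch-level limit (200 MB)

HARD_CEILING = 2_147_483_648             # 2 GB maximum for both CPS parameters

required = cps_requests_batch_size * max_text_threshold
if max_data_per_process < required:
    print(f"max_data_per_process is too small: a full batch of large documents needs "
          f"{required} bytes ({required / 1_048_576:.0f} MB) and would be rejected")
else:
    print("max_data_per_process is consistent with the per-document threshold")

for name, value in (("max_text_threshold", max_text_threshold),
                    ("max_data_per_process", max_data_per_process)):
    if value > HARD_CEILING:
        print(f"{name} exceeds the 2 GB ceiling and will not be accepted")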

As for the IndexAgent, if you are going to change these values, you might need some other updates. There are a few timeout values such as 'request_time_out' (default 600 seconds), 'text_extraction_time_out' (between 60 and 300, default 300 seconds) or 'linguistic_processing_time_out' (between 60 and 360, default 360 seconds) that will probably be exceeded if you are processing large documents, so you might need to tweak them as well.

 

III. Summary

 

Parameter              Limit on                       Short Description                              Default Value                 Sample Value
contentSizeLimit       IndexAgent (indexagent.xml)    Maximum size of document                       20 000 000 bytes (19.07 MB)   104 857 600 bytes (100 MB)
max_text_threshold     CPS (*_configuration.xml)      Maximum text size of the document's content    10 485 760 bytes (10 MB)      41 943 040 bytes (40 MB)
max_data_per_process   CPS (*_configuration.xml)      Maximum text size of the CPS batch             31 457 280 bytes (30 MB)      5 * 41 943 040 bytes = 209 715 200 bytes (200 MB)

 
In summary, the first factor to consider is 'contentSizeLimit' on the IndexAgent side: all documents bigger (in document size) than 'contentSizeLimit' won't be submitted to full text indexing; they will simply be skipped. The second factor is then 'max_text_threshold', 'max_data_per_process' or both, depending on the sizes you assign them. They both rely on the text size after extraction, and they can both cause a document (or a whole batch) to be rejected from indexing.

Increasing the size thresholds is a somewhat complex exercise that needs careful thinking and alignment of numerous satellite parameters so that they can all work together without disrupting the performance or stability of the xPlore processes. These satellite parameters include timeouts, cleanup intervals, batch sizes, request sizes or even the JVM size.

 

The post Documentum – Large documents in xPlore indexing appeared first on Blog dbi services.

How to rename an oracle scheduler job

Flavio Casetta - Thu, 2019-10-03 14:07
Categories: DBA Blogs

How Can you Use Audience Targeting to Show your Ads?

VitalSoftTech - Thu, 2019-10-03 10:10

Audience targeting has been a crucial tool in an advertiser’s arsenal for a long time. You can use audience targeting to show your ads to the exact people you’re targeting. What’s more, you can show it to them at the exact time you want. As advertisers, we believe that this is one of the core […]

The post How Can you Use Audience Targeting to Show your Ads? appeared first on VitalSoftTech.

Categories: DBA Blogs

Online Wine Retailer Fine-Tunes Inventory and Speeds Delivery with Oracle Cloud

Oracle Press Releases - Thu, 2019-10-03 09:00
Blog
Online Wine Retailer Fine-Tunes Inventory and Speeds Delivery with Oracle Cloud

By Guest Author, Oracle—Oct 3, 2019

Enjoying wine may be an age-old pleasure, but making sure online buyers get the right wine delivered promptly is a very modern concern.

Vinomofo was founded in 2011 in an Adelaide garage with a focus on making great wines available to wine lovers and helping great wine makers grow. The company now serves half a million wine buyers in Australia, New Zealand, and Singapore. Vinomofo initially focused on marketing, customer interaction and sales while relying on a third party to handle warehousing, logistics, stock levels and distribution.

While the service from the third-party supplier was great, it was not a scalable solution. Vinomofo decided it had to take back control of these back-office tasks, but with the help of a technology partner.

In light of this strategic change, it turned to Oracle Warehouse Management Cloud (WMS), giving Vinomofo a better handle on key data such as inventory levels and reporting. The Oracle cloud-based system lets the company check key performance indicators (KPIs) to make sure employees and the system itself meet expectations.

These automated processes enable Vinomofo to focus on the wine itself and the company’s 550,000-strong member base, rather than worrying about logistics and back-of-house operations. Since using WMS, Vinomofo has been able to deliver wine three times faster—we’ll raise a glass to that!

It also allowed Vinomofo to launch a new “click-and-collect” service in its Melbourne distribution center within three weeks of using the software. The system will also enable customers to mix their own cases, something Vinomofo is excited to launch within the next 12 months.

The Oracle technology has both improved the accuracy of inventory stock checks and made it easier to offer same-day shipping.

According to Krista Diez-Simson, CFO and COO of Vinomofo, “Oracle cloud has enabled us to focus on quality and curation whilst warehousing and distribution now take care of themselves.”

Watch the Vinomofo Video

In this video, Krista Diez-Simson, CFO and COO of Vinomofo, shares how Oracle Cloud is helping the company focus on what it does best—delivering wine.


 

Read More Oracle Cloud Customer Stories

Vinomofo is one of the thousands of customers from around the world on its journey to cloud. Read about others in Stories from Oracle Cloud: Business Successes

Trace Files

Jonathan Lewis - Thu, 2019-10-03 07:38

A recent blog note by Martin Berger about reading trace files in 12.2 popped up in my twitter timeline yesterday and reminded me of a script I wrote a while ago to create a simple view I could query to read the trace file generated by the current session while the session was still connected. You either have to create the view and a public synonym through the SYS schema, or you have to use the SYS schema to grant select privileges on several dynamic performance views to the user so that the user can create the view in their own schema. For my scratch database I tend to create the view in the SYS schema.

Script to be run by SYS:

rem
rem     Script: read_trace_122.sql
rem     Author: Jonathan Lewis
rem     Dated:  Sept 2018
rem
rem     Last tested
rem             12.2.0.1

create or replace view my_trace_file as
select 
        *
from 
        v$diag_trace_file_contents
where
        (adr_home, trace_filename) = (
                select
                --      substr(tracefile, 1, instr(tracefile,'/',-1)-1),
                        substr(
                                substr(tracefile, 1, instr(tracefile,'/',-1)-1),
                                1,
                                instr(
                                        substr(tracefile, 1, instr(tracefile,'/',-1)),
                                        'trace'
                                ) - 2
                        ),
                        substr(tracefile, instr(tracefile,'/',-1)+1) trace_filename
                from 
                        v$process
                where   addr = (
                                select  paddr
                                from    v$session
                                where   sid = (
                                        sys_context('userenv','sid')
                                        -- select sid from v$mystat where rownum = 1
                                        -- select dbms_support.mysid from dual
                                )
                        )
        )
;


create public synonym my_trace_file for sys.my_trace_file;
grant select on my_trace_file to {some role};

Alternatively, here are the privileges you could grant to a user from SYS so that they can create their own view:


grant select on v_$process to some_user;
grant select on v_$session to some_user;
grant select on v_$diag_trace_file_contents to some_user;
and optionally one of:
        grant select on v_$mystat to some_user;
        grant execute on dbms_support to some_user;
                but dbms_support is no longer installed by default.

The references to package dbms_support and view v$mystat are historic ones I have lurking in various scripts from the days when the session id (SID) wasn’t available in any simpler way.

Once the view exists and is available, you can enable some sort of tracing from your session then query the view to read back the trace file. For example, here’s a simple “self-reporting” (it’s going to report the trace file that it causes) script that I’ve run from 12.2.0.1 as a demo:


alter system flush shared_pool;
alter session set sql_trace true;

set linesize 180
set trimspool on
set pagesize 60

column line_number      format  999,999
column piece            format  a150    
column plan             noprint
column cursor#          noprint

break on plan skip 1 on cursor# skip 1

select
        line_number,
        line_number - row_number() over (order by line_number) plan,
        substr(payload,1,instr(payload,' id=')) cursor#,
        substr(payload, 1,150) piece
from
        my_trace_file
where
        file_name = 'xpl.c'
order by
        line_number
/

alter session set sql_trace false;

The script flushes the shared pool to make sure that it’s going to trigger some recursive SQL then enables a simple SQL trace. The query then picks out all the lines in the trace file generated by code in the Oracle source file xpl.c (execution plans seems like a likely guess) which happens to pick out all the STAT lines in the trace (i.e. the ones showing the execution plans).

I’ve used the “tabibitosan” method to identify all the lines that belong to a single execution plan by assuming that they will be consecutive lines in the output starting from a line which includes the text ” id=1 “ (the surrounding spaces are important), but I’ve also extracted the bit of the line which includes the cursor number (STAT #nnnnnnnnnnnnnnn) because two plans may be dumped one after the other if multiple cursors close at the same time. There is still a little flaw in the script because sometimes Oracle will run a sys-recursive statement in the middle of dumping a plan to turn an object_id into an object_name, and this will cause a break in the output.

The result of the query is to extract all the execution plans in the trace file and print them in the order they appear – here’s a sample of the output:


LINE_NUMBER PIECE
----------- ------------------------------------------------------------------------------------------------------------------------------------------------------
         38 STAT #140392790549064 id=1 cnt=0 pid=0 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=3 pr=0 pw=0 str=1 time=53 us cost=4 size=113 card
         39 STAT #140392790549064 id=2 cnt=0 pid=1 pos=1 obj=37 op='INDEX RANGE SCAN I_OBJ2 (cr=3 pr=0 pw=0 str=1 time=47 us cost=3 size=0 card=1)'


         53 STAT #140392790535800 id=1 cnt=1 pid=0 pos=1 obj=0 op='MERGE JOIN OUTER (cr=5 pr=0 pw=0 str=1 time=95 us cost=2 size=178 card=1)'
         54 STAT #140392790535800 id=2 cnt=1 pid=1 pos=1 obj=4 op='TABLE ACCESS CLUSTER TAB$ (cr=3 pr=0 pw=0 str=1 time=57 us cost=2 size=138 card=1)'
         55 STAT #140392790535800 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'
         56 STAT #140392790535800 id=4 cnt=0 pid=1 pos=2 obj=0 op='BUFFER SORT (cr=2 pr=0 pw=0 str=1 time=29 us cost=0 size=40 card=1)'
         57 STAT #140392790535800 id=5 cnt=0 pid=4 pos=1 obj=73 op='TABLE ACCESS BY INDEX ROWID TAB_STATS$ (cr=2 pr=0 pw=0 str=1 time=10 us cost=0 size=40 card=1)
         58 STAT #140392790535800 id=6 cnt=0 pid=5 pos=1 obj=74 op='INDEX UNIQUE SCAN I_TAB_STATS$_OBJ# (cr=2 pr=0 pw=0 str=1 time=8 us cost=0 size=0 card=1)'


         84 STAT #140392791412824 id=1 cnt=1 pid=0 pos=1 obj=20 op='TABLE ACCESS BY INDEX ROWID BATCHED ICOL$ (cr=4 pr=0 pw=0 str=1 time=25 us cost=2 size=54 card
         85 STAT #140392791412824 id=2 cnt=1 pid=1 pos=1 obj=42 op='INDEX RANGE SCAN I_ICOL1 (cr=3 pr=0 pw=0 str=1 time=23 us cost=1 size=0 card=2)'


         94 STAT #140392790504512 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=7 pr=0 pw=0 str=1 time=432 us cost=6 size=374 card=2)'
         95 STAT #140392790504512 id=2 cnt=2 pid=1 pos=1 obj=0 op='HASH JOIN OUTER (cr=7 pr=0 pw=0 str=1 time=375 us cost=5 size=374 card=2)'
         96 STAT #140392790504512 id=3 cnt=2 pid=2 pos=1 obj=0 op='NESTED LOOPS OUTER (cr=4 pr=0 pw=0 str=1 time=115 us cost=2 size=288 card=2)'
         97 STAT #140392790504512 id=4 cnt=2 pid=3 pos=1 obj=19 op='TABLE ACCESS CLUSTER IND$ (cr=3 pr=0 pw=0 str=1 time=100 us cost=2 size=184 card=2)'
         98 STAT #140392790504512 id=5 cnt=1 pid=4 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=85 us cost=1 size=0 card=1)'
         99 STAT #140392790504512 id=6 cnt=0 pid=3 pos=2 obj=75 op='TABLE ACCESS BY INDEX ROWID IND_STATS$ (cr=1 pr=0 pw=0 str=2 time=8 us cost=0 size=52 card=1)'
        100 STAT #140392790504512 id=7 cnt=0 pid=6 pos=1 obj=76 op='INDEX UNIQUE SCAN I_IND_STATS$_OBJ# (cr=1 pr=0 pw=0 str=2 time=7 us cost=0 size=0 card=1)'
        101 STAT #140392790504512 id=8 cnt=0 pid=2 pos=2 obj=0 op='VIEW  (cr=3 pr=0 pw=0 str=1 time=47 us cost=3 size=43 card=1)'
        102 STAT #140392790504512 id=9 cnt=0 pid=8 pos=1 obj=0 op='SORT GROUP BY (cr=3 pr=0 pw=0 str=1 time=44 us cost=3 size=15 card=1)'
        103 STAT #140392790504512 id=10 cnt=0 pid=9 pos=1 obj=31 op='TABLE ACCESS CLUSTER CDEF$ (cr=3 pr=0 pw=0 str=1 time=21 us cost=2 size=15 card=1)'
        104 STAT #140392790504512 id=11 cnt=1 pid=10 pos=1 obj=30 op='INDEX UNIQUE SCAN I_COBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'


        116 STAT #140392791480168 id=1 cnt=4 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=3 pr=0 pw=0 str=1 time=62 us cost=3 size=858 card=13)'
        117 STAT #140392791480168 id=2 cnt=4 pid=1 pos=1 obj=21 op='TABLE ACCESS CLUSTER COL$ (cr=3 pr=0 pw=0 str=1 time=24 us cost=2 size=858 card=13)'
        118 STAT #140392791480168 id=3 cnt=1 pid=2 pos=1 obj=3 op='INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 str=1 time=11 us cost=1 size=0 card=1)'


        126 STAT #140392789565328 id=1 cnt=1 pid=0 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=3 pr=0 pw=0 str=1 time=21 us cost=2 size=68 card=1)'
        127 STAT #140392789565328 id=2 cnt=1 pid=1 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 str=1 time=12 us cost=1 size=0 card=1)'


        135 STAT #140392789722208 id=1 cnt=1 pid=0 pos=1 obj=18 op='TABLE ACCESS BY INDEX ROWID BATCHED OBJ$ (cr=3 pr=0 pw=0 str=1 time=22 us cost=3 size=51 card=
        136 STAT #140392789722208 id=2 cnt=1 pid=1 pos=1 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=2 pr=0 pw=0 str=1 time=16 us cost=2 size=0 card=1)'


        153 STAT #140392792055264 id=1 cnt=1 pid=0 pos=1 obj=68 op='TABLE ACCESS BY INDEX ROWID HIST_HEAD$ (cr=3 pr=0 pw=0 str=1 time=25 us)'
        154 STAT #140392792055264 id=2 cnt=1 pid=1 pos=1 obj=70 op='INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=2 pr=0 pw=0 str=1 time=19 us)'

If you want to investigate further, the “interesting” columns in the underlying view are probably: section_name, component_name, operation_name, file_name, and function_name. The possible names of functions, files, etc. vary with the trace event you’ve enabled.

 

Accenture and Oracle Help Insurers Meet IFRS 17 and LDTI Accounting Standards

Oracle Press Releases - Wed, 2019-10-02 15:00
Press Release
Accenture and Oracle Help Insurers Meet IFRS 17 and LDTI Accounting Standards New solution enables insurers to more effectively digitize their financial information and improve financial reporting of their insurance contracts

NEW YORK—Oct 2, 2019

Accenture (NYSE: ACN) and Oracle today unveiled an integrated technology and services offering that enables insurers to digitize their financial information and meet accounting reporting standards for the International Accounting Standard Board’s (IASB) IFRS 17 and US GAAP Long Duration Targeted Improvements (LDTI).
 
Combining Oracle’s IFRS 17 and Finance solutions with Accenture’s deep expertise and capabilities, the offering includes preconfigured templates, rules and reports to help insurers address the complexity associated with these accounting standards and ensure compliance.
 
Effective Jan. 1, 2022, IFRS 17 provides new standards for how insurers recognize revenue tied to their insurance contracts. Similarly, LDTI changes the assumptions insurers use to measure the liability of future policy benefits for traditional insurance contracts.
 
Many industry observers consider these new accounting standards to be the most challenging financial reporting changes for insurers in decades, requiring innovative approaches using new technology. Together, Accenture and Oracle have created a solution that drives benefits beyond the finance function to the entire enterprise. Recent Accenture research revealed that CFOs who fully harness the power of data and new technologies can drive value, improve efficiency and empower CEOs with strategic insights.
 
“The changes required from IFRS 17 provide an opportunity for insurers to capitalize on their rich repositories of data and derive more value from advanced analytics,” said Steve Culp, a senior managing director at Accenture and global head of the company’s Finance & Risk practice. “This joint offering provides the first steps on the journey to a full digitization of finance, enabling insurance CFOs to drive the enterprise strategy beyond the borders of the finance function.”
 
Sonny Singh, general manager and senior vice president, Oracle Financial Services, added, “IFRS 17/LDTI Standards are a historic change to insurance revenue recognition requirements. It presents a unique opportunity for insurers to align and transform disparate business processes and gain greater operational efficiency and insight. Oracle’s modern integrated solution, combined with Accenture’s comprehensive services, will allow adoption of IFRS 17 standards and establish the baseline for a wider insurance modernization initiative.”

The new solution is built into Accenture myConcerto, a fully integrated digital platform that brings together Accenture’s most disruptive thinking, leading industry solutions and Oracle technologies to drive enterprise transformation. It is one of the latest offerings developed through the longstanding Accenture and Oracle alliance, which has spanned more than 25 years. Accenture has been one of Oracle’s leading systems integration partners globally, with more than 54,000 Oracle-skilled consultants around the world who help clients accelerate digital transformation by implementing Oracle-based business solutions and new business processes that develop and evolve as their digital business grows. Accenture is a Global Cloud Elite and Platinum level member of Oracle PartnerNetwork (OPN) and is certified as an Oracle Cloud Excellence Implementer.
 
More information about the offering can be found here.

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
judi.palmer@oracle.com
Michael McGinn
Accenture
+1 917 452 9458
m.mcginn@accenture.com
About Accenture

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions—underpinned by the world’s largest delivery network—Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 492,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.

About Oracle Financial Services

Oracle Financial Services provides solutions for retail banking, corporate banking, payments, asset management, life insurance, annuities and healthcare payers. With our comprehensive set of integrated digital and data platforms, banks and insurers are empowered to deliver next generation financial services. Our intelligent and open digital solutions enable customer-centric transformation, support collaborative innovation and drive efficiency. Our data and analytical platforms help financial institutions drive customer insight, integrated risk and finance, fight financial crime and comply with regulations. To learn more about Oracle Financial Services visit our website. Click here to learn more about Oracle IFRS 17 solutions.

Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program—an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

Michael McGinn

  • +1 917 452 9458
