All Posts

N.A
Thanks, I found this solution insightful and helpful for a similar scenario I am working on.
Hi all, are there any steps to follow to ingest transactional data from a TIBCO database into Splunk without any add-ons?
How quickly does that software update correct the node's time after hibernation? Basically, after that, the UF's cron schedule should work as expected. If not, I propose that you create a support case with Splunk.
@isoutamo, no, I have not tried spath. Could you please guide me with that? I tried the below; it shows events, but I am not getting the transaction-level information:

index="<indexname>" source="user1" OR source="user2" "<ProcessName>" "Exception occurred"
| spath
| table _time JobId TransactionId _raw
| search JobId=*
| append [ search index="<indexname>" source="user1" OR source="user2" | spath | search JobId=* | table _time JobId TransactionId _raw ]
| stats dc(TransactionId) as UniqueTransactionCount values(TransactionId) as UniqueTransactions by JobId
This is from the documentation:

Best practices for creating chain searches
Use these best practices to make sure that chain searches work as expected.

Use a transforming base search
A base search should be a transforming search that returns results formatted as a statistics table. For example, searches using the following commands are transforming searches: stats, chart, timechart, and geostats, among others. For more information on transforming commands, see About transforming commands in the Search Manual.

https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/dsChain#Best_practices_for_creating_chain_searches
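For illustration, a minimal sketch of that pattern; the index and field names here are hypothetical.

Base search (transforming, returns a statistics table):

index=web_logs sourcetype=access_combined
| stats count as requests, avg(response_time) as avg_resp_time by status

Post-process (chain) search, which runs over that table rather than the raw events:

| search status>=500
| table status requests avg_resp_time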
N.A  
Have you tried the spath command, since you have JSON data in use?
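For example, a minimal sketch of extracting JSON fields with spath (the JSON paths here are hypothetical; adjust them to the actual event structure):

index="<indexname>" source="user1" OR source="user2"
| spath path=job.JobId output=JobId
| spath path=job.TransactionId output=TransactionId
| stats dc(TransactionId) as UniqueTransactionCount by JobId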
@bowesmana, I have tried that, but I am not getting any results. Actually, I am trying to match the jobId from the message below, and then use that jobId to get the other records that match it.
Here is an old post on how to write "SQL joins" in Splunk SPL: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948
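As a sketch of the stats-based alternative to an SQL inner join that the linked post describes (the index names, key field, and value fields here are hypothetical):

(index=orders) OR (index=customers)
| eval src=if(index=="orders", "order", "customer")
| stats values(amount) as amounts, values(name) as customer_name, dc(src) as sources by customer_id
| where sources=2

The final where keeps only keys that appear in both indexes, which gives the inner-join behaviour without the join command.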
Those seem to be the same as on-prem: 500,000 events and 30 s (I think this was 60 s earlier, but it seems to be the same on-prem too). See https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/Savedsearches#Use_a_transforming_base_search Based on those, you have exceeded both limits. I suppose the event limit is much more important, and this could be the reason why it didn't work as expected.
Thank you for your response. I'll take an example to explain it better. Let's say we have two event entries in Splunk as below:

1. ****1111222*abcabcabac*ERROR*<time> Logging server exception... Error code: 6039 Error description: An internal message proxy HTTP error occurred when the request was being processed Parameter: 504 Gateway Time-out

2. ****1111222*xyzxyz*0078*ERROR*<time> ExecuteFactoryJob: Caught soap exception. Java factory ID: 3910059732_3_0_223344 Request failed after tries = 1

These two are different event entries in Splunk which can be fetched by query1 and query2 separately. First I check for "504 Gateway Time-out" with error code 6039 and extract the thread_id (1111222 in the above example) using rex. Then, using this thread_id, I look for event entries like the second example. If both are found, a single event should be returned containing both raw messages; it's like an inner join. If either event is not present, nothing should be returned. I tried using join as well, but it didn't work.

I tried your query; it gives results, some of which contain a single event and some of which contain both events grouped (which is expected). Each result event should individually contain the raw message of both examples. For the above example, the result should be one single event like below:

****1111222*abcabcabac*ERROR*<time> Logging server exception... Error code: 6039 Error description: An internal message proxy HTTP error occurred when the request was being processed Parameter: 504 Gateway Time-out
****1111222*xyzxyz*0078*ERROR*<time> ExecuteFactoryJob: Caught soap exception. Java factory ID: 3910059732_3_0_223344 Request failed after tries = 1

@isoutamo  @ITWhisperer
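A sketch of one way to express that inner-join requirement, keeping only thread ids that have both event types (the marker strings come from the examples above, and the index name is taken from the reply below):

index="wfd-rpt-app" ("504 Gateway Time-out" "Error code: 6039") OR "ExecuteFactoryJob: Caught soap exception"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| eval evt_type=if(searchmatch("Error code: 6039"), "gateway_timeout", "soap_exception")
| stats values(_raw) as raw_messages, dc(evt_type) as types by thread_id
| where types=2
| table thread_id raw_messages

The dc(evt_type)=2 condition drops any thread_id for which only one of the two events was found.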
@isoutamo: The base search returns 66,449,351 events for the last 1 day (earliest=-1d@d and latest=now) and completes in 37.51 seconds. We are using Splunk Cloud in our environment; what are the event-count limits a base search can process? Could you please share this? I will try modifying my search as per your suggestion and update.
@ITWhisperer: Thanks for your reply. The primary purpose of using a base search with post-processing searches is to minimize search runtime and ensure the dashboard panels load quickly. While the fields command retains the necessary fields for post-processing, it is not producing accurate results in this case. Although replacing fields with the table command yields accurate results, it significantly increases resource usage and search completion time, negatively impacting dashboard performance. Is there any specific reason why the fields command is not giving accurate results?

Regards
VK
Hi,

How many events is the base search returning, and how long does it take to finish? There are limits for both, and quite probably you have hit them. Looking at your base and post-process searches, you could modify your base search to include stats there, which is the recommended way to use it:

index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time EId

Then both post-process searches become something like this:

| search EId="5eb2aee9"
| stats count as Total, count(failures) as failures, first(p95RespTime) as p95RespTime by _time
| eval "FailureRate"= round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

r. Ismo
I'm not sure if this also works on the free license, but you could try Settings - Licensing - Usage report. There should be statistics of license usage there. But if you are using the free license and it's not locked, then you are using less than its maximum, which is 500 MB/day.
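If that report is not available, a sketch that reads the internal license usage log directly (this log should exist on the licensing node; verify availability in your environment):

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB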
Try changing your base search so that it ends with a table command rather than a fields command. Also, your EId is different in your two post-processing searches.
Is there somewhere I can see how much I am spending on my searches (compute) rather than on data (storage)?
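One way to approximate per-user search compute is the audit log; a sketch (the _audit index and its total_run_time field are standard internals, but verify access in your environment):

index=_audit action=search info=completed
| stats sum(total_run_time) as total_runtime_s, count as searches by user
| sort -total_runtime_s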
The minimum SCP license size is 5 GB per day. You can get its price from your local Splunk partner or directly from Splunk.
Using wildcards at the beginning and end of search strings is not necessary (or advised), and if you can narrow your search of indexes, that might improve matters. As @isoutamo says, using _time in the by clause may not give you what you expect, as you will get a different result event (row) for each _time and thread_id combination. Also, AND is implied in searches and is therefore unnecessary in this instance. Try something like this:

index="wfd-rpt-app" ("504 Gateway Time-out" "Error code: 6039") OR "ExecuteFactoryJob: Caught soap exception"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| stats values(_raw) as raw_messages by thread_id
| table thread_id, raw_messages

The time of each of the events is likely to be in the _raw message, but if you want that broken out in some way, please provide some sample raw event data (anonymised appropriately) and a description / example of your expected results.
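If the timestamps do need breaking out, a sketch assuming _time is parsed correctly at index time:

index="wfd-rpt-app" ("504 Gateway Time-out" "Error code: 6039") OR "ExecuteFactoryJob: Caught soap exception"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| stats values(_raw) as raw_messages, min(_time) as first_seen, max(_time) as last_seen by thread_id
| convert ctime(first_seen) ctime(last_seen)
| table thread_id first_seen last_seen raw_messages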