All Posts

Maybe not a common TA or app, but the Splunk App for Stream uses the KV store. I found this out recently while doing some troubleshooting. So on stream servers, make sure server.conf sets [kvstore] disabled = false
Hello All, I was able to solve this issue. I was digging into cURL's capabilities and the answer is curl -K configFile. Here is how it works. Suppose you need to send an extremely long query to the Splunk API from your app or script with your curl command (an SPL search command of 121852 characters in my case).

1. The curl command:

curl -K query.spl --noproxy '*' -H "Authorization: Splunk myTOKEN" https://mySearchHEAD:8089/servicesNS/admin/search/search/jobs

### --noproxy '*' is optional and depends on your network setup

2. Your config file query.spl, content and syntax:

[someUser@algunServidor:~/myDirectorio]$ more query.spl
-d exec_mode=oneshot   ## this can be normal
-d output_mode=json    ## this can be xml or csv
-d "search=| search index=myIndex sourcetype=mySourcetype _raw=*somethingIamLooking for* field1=something1 field2=something2 .... fieldN=somethingN earliest=-1h latest=now"

### It is really important to pay attention to the quotes around the search parameter above; you need them to make it work.

I hope this helps someone.
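The workflow above can be sketched outside the shell as well. Here is a minimal Python sketch that generates the same kind of curl config file and builds the corresponding command line; the search string, token, and host are placeholders, not real values:

```python
# Sketch of the "curl -K" workflow: write the long query into a config
# file so the command line itself stays short. All names are placeholders.
long_search = ("| search index=myIndex sourcetype=mySourcetype "
               "field1=something1 earliest=-1h latest=now")

config_lines = [
    "-d exec_mode=oneshot",
    "-d output_mode=json",
    # The quotes around the whole -d value are what let a long search survive
    f'-d "search={long_search}"',
]

with open("query.spl", "w") as f:
    f.write("\n".join(config_lines) + "\n")

# curl then reads every -d option from the file instead of the command line:
cmd = ["curl", "-K", "query.spl",
       "-H", "Authorization: Splunk myTOKEN",
       "https://mySearchHEAD:8089/servicesNS/admin/search/search/jobs"]
print(" ".join(cmd[:3]))  # curl -K query.spl
```

This keeps the only length-sensitive part (the search string) out of the shell entirely, which is why the approach works for queries of 100k+ characters.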
Hi, is there a query to list all the queries that time out in Splunk Cloud? Thank you  Kind regards Marta
Change the time range of both panels to some historical period for which your summary index will have data, e.g. earliest=-24h latest=-2m@m. This way your summary will have some data summarized, and your drilldown search will only look at raw data within the summarized time range.
Thank you for the suggestion. Best regards Marta
Hi, this forum is specific to Splunk Observability Cloud, which includes products like APM, RUM, IM, Synthetics... You'll get better replies if you post your question in the "Splunk Search" section (https://community.splunk.com/t5/Splunk-Search/bd-p/splunk-search)
Give this a try.

Find count of events by userAgent:

Your base search
| rex "\]\s+(\"+[^\"]+){3}\"+\s+\"+(?<userAgent>[^\"]+)"
| stats count by userAgent

Trend of distinct count of userAgents over time:

Your base search
| rex "\]\s+(\"+[^\"]+){3}\"+\s+\"+(?<userAgent>[^\"]+)"
| timechart dc(userAgent) as distinct_userAgents
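If you want to sanity-check that rex pattern outside Splunk, here is a small Python sketch against a made-up Apache combined-log line. The sample event is an assumption (the original thread does not show one), and Splunk's rex uses PCRE, so minor dialect differences are possible:

```python
import re

# Hypothetical Apache combined-log event; the final quoted field is the user agent.
event = ('127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] '
         '"GET /apache_pb.gif HTTP/1.0" 200 2326 '
         '"http://example.com/start.html" "Mozilla/4.08 [en] (Win98; I)"')

# Same pattern as the rex above: after the closing timestamp bracket, skip
# three quote-delimited chunks (request, status/bytes, referrer), then
# capture the next quoted field as the user agent.
pattern = r'\]\s+("+[^"]+){3}"+\s+"+(?P<userAgent>[^"]+)'

m = re.search(pattern, event)
print(m.group("userAgent"))  # Mozilla/4.08 [en] (Win98; I)
```

Testing the regex on one raw event like this before wiring it into a dashboard saves a lot of round-trips through the search bar.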
Hi @Adpafer,
you can compare one high-performance indexer (48 CPU, 128 GB) with two medium-size indexers (each 24 CPU, 64 GB) without clustering, and the performance is more or less the same. You cannot compare one indexer with two clustered indexers because, in the second case, you also have high availability, which you obviously don't have with one server. Anyway, I always suggest more than one system, even if not clustered. Obviously clustered is better!
Ciao.
Giuseppe
There is no way to link to a specific event, but you can link to a search that returns that event.
Is there a way to point to an existing event in Splunk using a URI link like https://mysplunk.mycompany.com/....
It depends. Faster at what? Assuming the same CPU in both cases, both will run at the same speed. When you have more than one indexer with data distributed among them, searches will be faster because of parallel processing. This benefit works with both clustered and independent indexers. Clustering of indexers is for data protection/redundancy rather than for speed. There is no doubt some overhead in a cluster for replicating data, but I've not seen any data about how much that affects performance.
Hello, I would like to know if it is possible to add a hyperlink in a table cell/column. I have a column titled Link whose values are URLs, and I would like them to be clickable as links. I know this can be achieved by editing the XML source, but that is not possible in Dashboard Studio, right? Please let me know if there is a way to do this. Many thanks.
Hello All, can we implement time series analysis and anomaly detection in Splunk using the Matrix Profile approach? If yes, can you please suggest an approach, considering we need to compute the Euclidean distance of multiple subsequences of a given time series and then make decisions? Thank you, Taruchit
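For context, the core computation the question describes — Euclidean distances between every pair of subsequences, keeping each window's nearest non-overlapping match — can be sketched in plain Python. This is a naive illustration only, with an invented series and window length, not a Splunk or MLTK recipe:

```python
import math

# Toy series: a repeating motif with nothing special in it; window length m=3.
series = [1.0, 2.0, 9.0, 2.0, 1.0, 2.0, 9.0, 2.0]
m = 3

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

windows = [series[i:i + m] for i in range(len(series) - m + 1)]

# Naive matrix profile: for each window, the distance to its nearest
# neighbour, excluding overlapping windows (trivial self-matches).
profile = []
for i, w in enumerate(windows):
    dists = [euclidean(w, v) for j, v in enumerate(windows) if abs(i - j) >= m]
    profile.append(min(dists))

print([round(d, 2) for d in profile])  # [0.0, 0.0, 9.95, 7.14, 0.0, 0.0]
```

Low profile values mean a subsequence repeats elsewhere (a motif); high values flag subsequences with no close match (discords), which is the anomaly-detection signal. Real matrix profile algorithms (STAMP/STOMP) compute the same quantity far more efficiently.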
Hi Everybody, could you please explain which works faster: one high-performance indexer server (48 CPU, 128 GB) or two indexers in a cluster (each 24 CPU, 64 GB)? Thanks for the reply. Regards, pawelF
This works with the sample event:

<<your query>>
| rex "https?\S+\s\\\"+(?<UA>[^\\\"]+)"
| stats count by UA
Hi @Lavender,
you have to use the same field name to correlate the two searches, something like this:

index=xyz component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| search RequestID=*
| table _time, Country, Environment, appID, LogMessage
| append
    [ search index=xyz appid=12345 message="*|osv|*" level="error" `mymacrocompo`
    | rex "trace-id.(?<RequestID>\d+)"
    | search RequestID=*
    | table RequestID LogMessage1 ]
| stats earliest(_time) AS _time values(Country) AS Country values(Environment) AS Environment values(appID) AS appID values(LogMessage) AS LogMessage values(eval(if(level="error",LogMessage1,"NULL"))) AS Errorlogs BY RequestID

Ciao.
Giuseppe
Use the same field name in the main search and in the appended search, then use the stats command to join the results on common RequestID values.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time, Country, Environment, appID, LogMessage
| append
    [ search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID>\d+)"
    | fillnull value=NULL RequestID
    | search RequestID!=NULL
    | table RequestID LogMessage1 ]
| stats values(*) as * by RequestID

The reason why |eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL") does not work is that the append command puts the events with RequestID1 on different "rows" than events with RequestID. Since SPL looks at one event (row) at a time, no event will have both RequestID and RequestID1.
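The row-at-a-time point above is the whole trick, and it can be sketched outside SPL. Here is a minimal Python analogue of | stats values(*) as * by RequestID over appended rows; the field names mirror the SPL, but the sample rows are invented:

```python
from collections import defaultdict

# append stacks the two result sets as separate rows, so no single row
# carries both LogMessage and LogMessage1 at the same time.
main_rows = [
    {"RequestID": "101", "LogMessage": "request received"},
    {"RequestID": "102", "LogMessage": "request received"},
]
appended_rows = [
    {"RequestID": "101", "LogMessage1": "gateway timeout"},
]

# stats values(*) as * by RequestID: group every row by RequestID and
# collect the values of each remaining field within the group.
grouped = defaultdict(dict)
for row in main_rows + appended_rows:
    key = row["RequestID"]
    for field, value in row.items():
        if field != "RequestID":
            grouped[key].setdefault(field, []).append(value)

print(grouped["101"])
# {'LogMessage': ['request received'], 'LogMessage1': ['gateway timeout']}
```

Only after this grouping step do LogMessage and LogMessage1 sit side by side for the same RequestID, which is why the row-wise eval comparison can never fire but the stats join can.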
Hi all, I have a custom summary index which holds the required fields from many indexes in order to build a dashboard. The problem is that when P2 in the first panel shows a count of 36, and we drill down on that number to check more details, the counts mismatch, because the custom summary index refreshes every 2 minutes and the dashboard takes time to load. Please let me know how to fix this so that, when the drilldown panel loads, its count matches the first panel.
Hi, I have the same field whose value has to be compared between 2 search queries, so kindly help with the below.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time, Country, Environment, appID, LogMessage
| append
    [ search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID1>\d+)"
    | fillnull value=NULL RequestID1
    | search RequestID1!=NULL
    | table LogMessage1 ]
| eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL")

In the above query, we have RequestID in the main query and in the sub query as well. We have to find the error logs based on RequestID, which means if RequestID matches RequestID1, we need to display LogMessage1.