All Posts


Give this a try Find count of events by userAgent Your base search | rex "\]\s+(\"+[^\"]+){3}\"+\s+\"+(?<userAgent>[^\"]+)" | stats count by userAgent    Trend of distinct count of userAgents over time Your base search | rex "\]\s+(\"+[^\"]+){3}\"+\s+\"+(?<userAgent>[^\"]+)" | timechart dc(userAgent) as distinct_userAgents
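The same extraction can be sanity-checked outside Splunk with Python's re module. Note that rex uses the PCRE-style (?&lt;name&gt;...) for named groups while Python needs (?P&lt;name&gt;...); the sample log line below is invented for illustration, not taken from the original thread.

```python
import re

# Splunk rex pattern, translated to Python named-group syntax
pattern = re.compile(r'\]\s+("+[^"]+){3}"+\s+"+(?P<userAgent>[^"]+)')

# Hypothetical access-log-style event (an assumption, not the asker's data)
event = '127.0.0.1 - - [21/Sep/2023] "GET /index HTTP/1.1" 200 "ref" "Mozilla/5.0 (X11)"'

match = pattern.search(event)
if match:
    print(match.group("userAgent"))  # Mozilla/5.0 (X11)
```

If the pattern matches your events here, the identical regex should behave the same in the rex command, since SPL regexes are PCRE-based.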
Hi, is there a query to list all the queries that time out in Splunk Cloud? Thank you  Kind regards Marta  
Hi @Adpafer, you can compare one high-performance indexer (48 CPU, 128 GB) with two medium-sized indexers (24 CPU, 64 GB each) without clustering, and the performance is more or less the same. You cannot compare one indexer with two clustered indexers because, in the second case, you also get High Availability, which you obviously don't have with one server. Anyway, I always suggest more than one system, even if not clustered. Obviously clustered is better! Ciao. Giuseppe
There is no way to link to a specific event, but you can link to a search that returns that event.
Is there a way to point to an existing event in Splunk using a URI link like https://mysplunk.mycompany.com/....
It depends. Faster at what? Assuming the same CPUs in both cases, both will run at the same speed. When you have more than one indexer with data distributed among them, searches will be faster because of parallel processing. This benefit applies to both clustered and independent indexers. Clustering of indexers is for data protection/redundancy rather than for speed. There is no doubt some overhead in a cluster for replicating data, but I've not seen any figures on how much that affects performance.
Hello, I would like to know if it is possible to add a hyperlink in a table cell/column. I have a column titled Link whose values are URLs, and I would like them to be clickable links. I know this can be achieved by editing the XML source, but that is not possible in Dashboard Studio, right? Please let me know if there is a way to do this. Many thanks.
Hello All, Can we implement time series analysis and anomaly detection in Splunk using the Matrix Profile approach? If yes, can you please suggest an approach, considering that we need to compute the Euclidean distance of multiple sub-sequences for a given time series and then make decisions? Thank you Taruchit
Hi Everybody, Could you please explain which performs faster: one high-performance indexer (48 CPU, 128 GB), or two clustered indexers (24 CPU, 64 GB each)? Thanks for the reply. Regards, pawelF
This works with the sample event   <<your query>> | rex "https?\S+\s\\\"+(?<UA>[^\\\"]+)" | stats count by UA  
Hi @Lavender, you have to use the same field name to correlate the two searches, something like this:

index=xyz component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| search RequestID=*
| table _time, Country, Environment, appID, LogMessage
| append [search index=xyz appid=12345 message="*|osv|*" level="error" `mymacrocompo`
    | rex "trace-id.(?<RequestID>\d+)"
    | search RequestID=*
    | table RequestID, LogMessage1 ]
| stats earliest(_time) AS _time values(Country) AS Country values(Environment) AS Environment values(appID) AS appID values(LogMessage) AS LogMessage values(eval(if(level="error",LogMessage1, "NULL"))) AS Errorlogs BY RequestID

Ciao. Giuseppe
Use the same field name in the main search and in the appended search, then use the stats command to join the results on common RequestID values.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time, Country, Environment, appID, LogMessage
| append [search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID>\d+)"
    | fillnull value=NULL RequestID
    | search RequestID!=NULL
    | table LogMessage1]
| stats values(*) as * by RequestID

The reason why | eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL") does not work is that the append command puts the events with RequestID1 on different "rows" than the events with RequestID. Since SPL looks at one event (row) at a time, no event will ever have both RequestID and RequestID1.
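As a toy model of why the per-row eval finds nothing while grouping by the shared key works, here is the same idea in plain Python (the field names mirror the thread; the row data is invented and this is only a rough stand-in for how stats merges rows, not Splunk's actual implementation):

```python
# append stacks the two result sets vertically: no single row ever carries
# both the main search's fields and the appended search's fields.
main_rows = [
    {"RequestID": "101", "LogMessage": "gateway ok"},
    {"RequestID": "102", "LogMessage": "gateway ok"},
]
error_rows = [
    {"RequestID": "102", "LogMessage1": "timeout error"},
]

# Rough stand-in for: | stats values(*) as * by RequestID
# -- rows sharing a RequestID are merged into one combined row.
merged = {}
for row in main_rows + error_rows:
    merged.setdefault(row["RequestID"], {}).update(row)

print(merged["102"])
# {'RequestID': '102', 'LogMessage': 'gateway ok', 'LogMessage1': 'timeout error'}
```

After the merge, only request 102 carries a LogMessage1, which is exactly the correlation the original eval was trying (and structurally unable) to express row by row.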
Hi all, I have a custom summary index that collects the required fields from many indexes to drive a dashboard. The problem is that when panel P2 shows a count of 36 and we drill down into that number to check the details, the counts don't match, because the custom summary index refreshes every 2 minutes and the dashboard takes time to load. Please let me know how to fix this so that, on load, the drilldown panel's count matches the first panel.
Hi, I have the same field whose value has to be compared between 2 search queries, so kindly help with the below.

index=xyz
| search component=gateway appid=12345 message="*|osv|*"
| rex "trace-id.(?<RequestID>\d+)"
| fillnull value=NULL RequestID
| search RequestID!=NULL
| table _time, Country, Environment, appID, LogMessage
| append [search index=xyz
    | search appid=12345 message="*|osv|*" level="error"
    | search `mymacrocompo`
    | rex "trace-id.(?<RequestID1>\d+)"
    | fillnull value=NULL RequestID1
    | search RequestID1!=NULL
    | table LogMessage1]
| eval Errorlogs=if(RequestID=RequestID1,"LogMessage1", "NULL")

In the above query, we have RequestID in the main query and in the sub-query as well. We have to find the error logs based on RequestID, which means that if RequestID matches RequestID1, we need to display LogMessage1.
I tried to use sourcetype=<sourcetypename> |rex field=_raw "(?<TLD>\.\w+?)(?:$|\/)" | table TLD — it returned TLDs, but it also included values that appear to be parts of IP addresses, e.g. .33, .136, .74, etc.
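One way around the IP-fragment matches is to require letters in the captured group, since TLDs are alphabetic while IP octets are not. A quick check of that idea in Python (the sample values below are invented; since Splunk rex is PCRE-based, the same pattern should carry over as a sketch, e.g. rex "\.(?<TLD>[A-Za-z]{2,})(?:$|\/)"):

```python
import re

# Original rex captured `\.\w+?`, which also matches ".33" from IPs.
# Requiring 2+ alphabetic characters keeps real TLDs and drops numeric octets.
tld_re = re.compile(r"\.([A-Za-z]{2,})(?:$|/)")

samples = [
    "example.com/",
    "malicious.xyz",
    "10.0.0.33",     # pure IP: should produce no match
    "host.co.uk",    # only the final label before end-of-string matches
]
tlds = [m.group(1) for s in samples if (m := tld_re.search(s))]
print(tlds)  # ['com', 'xyz', 'uk']
```

The extracted TLD field could then be compared against the blocklist lookup with something like | lookup in SPL; the exact lookup name and field mapping depend on your environment.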
Hi @ITWhisperer  Thank you very much for your prompt help. Thank you
I need a query that extracts TLDs from events and compares the results against a lookup table of blocklisted TLDs.
Hi, if you have some fields with and without the ., below is an example of how to get that to work. However, it only works going into an event index; it does not seem to work going into a metrics index.

[test_abc_transforms]
CLEAN_KEYS = false
DELIMS = ,
FIELDS = degraded.threshold,down.threshold

[drop_header]
REGEX = metric_timestamp,metric_name,_value,degraded\.threshold,down\.threshold
DEST_KEY = queue
FORMAT = nullQueue

Sample input file:

metric_timestamp,metric_name,_value,degraded.threshold,down.threshold
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
1695201472,mx.process.cpu.utilization,1.373348018,30,300
| streamstats count as row | eventstats max(row) as total | eval cutoff=total/10 | where row <= cutoff
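The pipeline above numbers the events, finds the total, and keeps only the first tenth of them. A plain-Python sketch of the same arithmetic (the 20-event list is an invented example):

```python
# Mirrors: streamstats count as row | eventstats max(row) as total
#          | eval cutoff=total/10 | where row <= cutoff
rows = list(range(1, 21))   # 20 events, numbered the way streamstats would
total = len(rows)           # eventstats max(row) as total -> 20
cutoff = total / 10         # eval cutoff=total/10 -> 2.0
kept = [r for r in rows if r <= cutoff]
print(kept)                 # [1, 2] -- the first 10% of the rows
```

Note that with event counts not divisible by 10, the <= comparison effectively floors the kept count, which may or may not be what you want.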
Hello
To find and fix CSV header errors across multiple files, write a script that checks the header row of each file for duplicate column names and invalid (empty) field names, then run it against your CSV file directory. In Python, a basic example might look like this:

import csv
import os

def check_csv_headers(file_path):
    with open(file_path, 'r') as csvfile:
        csvreader = csv.DictReader(csvfile)
        headers = csvreader.fieldnames
        if not headers:
            return  # empty file, no header row to check
        if len(headers) != len(set(headers)):
            print(f"Duplicate columns in: {file_path}")
        if '' in headers:
            print(f"Invalid field name in: {file_path}")

# Directory containing CSV files
directory = '/path/to/csv/files'
for filename in os.listdir(directory):
    if filename.endswith('.csv'):
        file_path = os.path.join(directory, filename)
        check_csv_headers(file_path)

Save the script to a file and run it against the directory containing the CSV files:

python check_csv_headers.py

This approach automates scanning your CSV files for header errors and should help you efficiently locate and fix these issues across multiple files on your Splunk Heavy Forwarder.
Thank you