All Topics


Good morning, I am looking at generating a search to find the slowest 1% of requests from IIS logs; however, I am not sure if this is possible. Just wondered if anyone has done something similar before?   Thanks   Joe
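A minimal SPL sketch of one way to do this, assuming the IIS time-taken field is extracted as time_taken (the index, sourcetype, and field names are assumptions about your environment): compute the 99th percentile with eventstats, then keep only the events above it.

index=iis sourcetype=iis
| eventstats perc99(time_taken) AS p99
| where time_taken > p99
| table _time, cs_uri_stem, time_taken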
High CPU utilization observed for the splunkd and python3.7 processes on a Splunk HF after a Splunk Enterprise upgrade from 7.x to 8.1.4. Any help would be appreciated. Thank you.
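Not an answer, but a sketch of an introspection search that can help narrow down which process is burning CPU on the HF (the host filter is a placeholder):

index=_introspection host=<your_hf> sourcetype=splunk_resource_usage component=PerProcess
| timechart avg(data.pct_cpu) by data.process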
Hi,  First off, apologies if this is the wrong forum to post this, but I am stuck and need help. I currently have a test environment set up as below. Symantec SEPM is sending syslog to a VIP load balancer, which then forwards to one of two HFs.

Flow is as follows: Symantec SEPM > LB > HF

Configuration as shown below:
Symantec SEPM version 14.3 RU1 with the following syslog configuration:
Syslog IP: VIP of load balancer
Syslog dest port: TCP 514
Syslog Line Separator: LF
The LB is configured to forward the logs to the HF via port 9997.

Issue: the risk logs used to come through previously but seem to have stopped now.

If I have missed anything, please let me know. Any feedback is greatly appreciated.

Regards, Mikhael
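One thing worth checking, offered as a hedged sketch rather than a diagnosis: 9997 is conventionally a splunktcp (Splunk-to-Splunk) port, while syslog relayed by a load balancer is raw TCP, so the HF needs a plain TCP input listening on whatever port the LB actually sends to. A minimal inputs.conf sketch for the HF (the port and sourcetype are assumptions):

[tcp://9997]
sourcetype = symantec:ep:syslog
connection_host = ip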
Hey guys, quick question.  Is it possible for me to use the REST API to do a search query but, rather than receiving a stream of objects, batch the results into an array of objects?

Query: services/search/jobs/export?output_mode=json&search=savedsearch infraportal_login_history user=mrodrig1&summarize=true

Currently receiving a stream of objects like this:
{"preview":false,"offset":0,"result":{"Computer_Name":"computername","_time":"2021-07-14 13:01:07.000 EDT"}}
{"preview":false,"offset":1,"result":{"Computer_Name":"computername","_time":"2021-07-14 17:01:08.000 EDT"}}
{"preview":false,"offset":2,"lastrow":true,"result":{"Computer_Name":"computername","_time":"2021-07-15 16:01:08.000 EDT"}}

Would prefer to have something like [{result},{result2}].
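One possibility, sketched under the assumption that a two-step flow is acceptable: the export endpoint always streams, but the results endpoint of a completed job returns one JSON object whose results key is an array.

1. POST to services/search/jobs with the search; note the returned sid.
2. GET services/search/jobs/<sid>/results?output_mode=json&count=0

The second call returns a single object shaped like {"results":[{...},{...}], ...}, which is effectively the [{result},{result2}] form you want.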
I have a large Splunk & ES environment and use the DMC daily. Is there a series of SPL searches that would help me perform such tasks? Thank you in advance.
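As one example of the kind of health-check SPL worth keeping handy alongside the DMC, a sketch that charts indexer queue fill percentage from the internal metrics (field names as they appear in metrics.log queue events):

index=_internal source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100,1)
| timechart avg(pct_full) by name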
Hi, I noticed that Dashboard Studio supports customized text anywhere on the dashboard, as shown below the "view all".  However, if I choose to use Dashboard Studio, then I assume I lose the capability to modify the Simple XML code or use JS scripts on that dashboard. Is it possible to have such drill-down text on a dashboard without using Dashboard Studio? Or is it possible to modify the Dashboard Studio code to achieve more customized functionality like that provided by XML?  Thank you!
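For the Simple XML side of this, a minimal sketch of a drill-down style text link using an html panel (the target path is a placeholder):

<dashboard>
  <row>
    <panel>
      <html>
        <a href="/app/search/target_dashboard">View all</a>
      </html>
    </panel>
  </row>
</dashboard>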
I have configured the Stream add-on on a UF and specified the location of the Stream app on the SH, as per the docs. In tcpdump, I can see traffic going back and forth between the SH and the UF; however, I don't see the UF coming up as a forwarder in the Stream app under the "Distributed Forwarder Management" page. The docs make no mention of how to troubleshoot this type of connection. Can somebody please help with this?
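For comparison, a sketch of what the stanza on the UF typically looks like in the Stream TA's inputs.conf (the hostname and path are placeholders; a mismatch here is a common reason a forwarder never registers):

[streamfwd://streamfwd]
splunk_stream_app_location = https://<sh_host>:8000/en-us/custom/splunk_app_stream/
disabled = 0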
What is a proper way of upgrading over 70 Apps and Add-ons to the new versions in Splunk Enterprise and ES?
Is there a way to configure the management port, which is used to access the REST API, to use the TLS certs we have from DigiCert?   I have server.conf set up with serverCert pointing to the location of that file.  However, when I check that port using openssl, we get the certificate that ships with Splunk, despite setting up server.conf to use the certificate from DigiCert.   Note, web access does correctly use the DigiCert certificate.  It's just access over the management port that doesn't.
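For reference, a sketch of the server.conf stanza that governs the management port's certificate, plus a quick check (paths are placeholders; note this is [sslConfig], which is separate from the web.conf settings behind Splunk Web):

[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/digicert_chain.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/digicert_ca.pem

Verify after a restart with: openssl s_client -connect <host>:8089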
I have a few reports created in Splunk Enterprise and would like to clone them to ES so they don't have to be re-created. Thank you.
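One low-tech option, assuming filesystem access: a report is just a savedsearches.conf stanza, so it can be copied between apps. A sketch with placeholder names:

# from $SPLUNK_HOME/etc/apps/search/local/savedsearches.conf
[My Report]
search = index=foo | stats count

# paste the stanza into $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf
# then restart or reload the configuration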
Hello, please let me know how I would write a props configuration file for this CSV file. A segment of sample data from the CSV file is given below. Any help will be highly appreciated, thank you!
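Since the sample didn't come through, here is a generic props.conf sketch for a headered CSV; the sourcetype name, timestamp field, and time format are all assumptions to adjust once the sample is visible:

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%d %H:%M:%S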
I need to do an analysis of API calls using logs: avg, min, max, percentile95, and percentile99 response times, as well as hits per second. So, if I have events like below:

/data/users/1443 | 0.5 sec
/data/users/2232 | 0.2 sec
/data/users/39 | 0.2 sec

Expectation: I want them grouped like below, per their API pattern:

proxy | max_response_time
/data/users/{id} | 0.5 sec

These path variables (like {id}) can be numerical or can be strings with special characters. I have about 3000 such API patterns with path variables in them. They can be categorized into 3 types: those with a path variable only at the end, those with 1 or more path variables only in the middle, and those with 1 or more path variables in the middle as well as at the end. Note: there are no arguments after the API, i.e. nothing like /data/view/{name}/pagecount?age=x; there will be just the URI part.

proxy | method | request_time
/data/users/{id} | POST | 0.046
/server/healthcheck/check/up | GET | 0.001
/data/commons/people/multi_upsert | POST | 0.141
/store/org/manufacturing/multi_read | POST | 0.363
/data/users/{id}/homepage/{name} | POST | 0.084
/data/view/{name}/pagecount | PUT | 0.043

Category 1 (path variable only at the end):
/data/users/{id} | POST | 0.046

Category 2 (1 or more path variables only in the middle):
/data/view/{name}/pagecount | PUT | 0.043
/data/view/{name}/details/{type}/pagecount | PUT | 0.043

Category 3 (1 or more path variables in the middle and also at the end):
/data/users/{id}/homepage/{name} | POST | 0.084
/data/users/{id}/homepage/{type}/details/{name} | POST | 0.084

Current query:

index="*myindex*" host="*abc*" host!=*ftp* sourcetype!=infra* sourcetype!=linux* sourcetype="nginx:plus:access"
| bucket span=1s _time
| stats count by env, tenant, uri_path, request_method, _time

I need the uri_path to be grouped per the API patterns I have. One option is to add 3000 regex replace statements to the query, like the one below, one per API pattern, but that makes the query too heavy to parse. I tried something like this for a sample pattern /api/data/users/{id}:

| rex mode=sed field=uri_path "s/\/api\/data\/users\/([^\/]+)$/\/api\/data\/users\/{id}/g"
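A hedged sketch of one way to avoid per-pattern rules for the numeric case: collapse every all-digit path segment into {id} with a single sed-mode replace, then aggregate. String-valued variables like {name} would still need a lookup or a small set of extra rules, and consecutive numeric segments may need a second pass because each match consumes its trailing slash:

| rex mode=sed field=uri_path "s/\/\d+(\/|$)/\/{id}\1/g"
| stats count avg(request_time) min(request_time) max(request_time) perc95(request_time) perc99(request_time) by uri_path, request_method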
How do I search for a complete list of all the apps on my deployment server, if possible excluding the built-in apps? Thank you in advance.
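A sketch using the rest search command against the deployment server's own endpoint (run it on the deployment server; it lists only deployment apps, which already excludes the built-in ones since those live outside etc/deployment-apps):

| rest /services/deployment/server/applications splunk_server=local
| table title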
Hi, here is my log:

2020-01-19 13:20:15,093 INFO ABC.InEE-Product-00000 [MyProcessor] Detail Packet: M[000] T[111] P[0A0000] AT[00] R[0000] TA[ABC.OutEE-Product] Status[OUT-LOGOUT,EXIT]
2020-01-19 13:36:08,185 INFO ABC.InEP-Product-00000 [MyProcessor] Detail Packet Lost: M[000] T[111] SA[ABC.InEE-Product]  R[0000]

What is the rex to extract SOURCE=ABC.InEE-Product, TARGET=ABC.OutEE-Product, Model=000, Tip=111, POD=0A0000?

Any idea? Thanks,
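A sketch against the first event's format; the second event uses SA[...] instead of TA[...] and has no P[...], so it would need a variant:

| rex "INFO\s+(?<SOURCE>\S+)-\d+\s"
| rex "M\[(?<Model>[^\]]+)\]\s+T\[(?<Tip>[^\]]+)\]"
| rex "P\[(?<POD>[^\]]+)\]"
| rex "TA\[(?<TARGET>[^\]]+)\]"

With the first event this yields SOURCE=ABC.InEE-Product, TARGET=ABC.OutEE-Product, Model=000, Tip=111, POD=0A0000.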
If I run this search I generate two numeric fields, one called number and the other called decimal:

| makeresults 1 | eval number = 7 | eval decimal = 7.0

When I choose to export this data as CSV, there are quotes around decimal but not around number.  Is it possible to ensure that neither field has quotes when the CSV is downloaded?
I have created a custom business transaction in one of my applications. Now I want to move those business transactions to another application (both are the same code base but different environments). I tried the application import/export option, but I had to make lots of changes. Is there any other way to move the custom business transactions?
An analyst adds a note to an investigation. Another analyst from another shift deletes this note. Where is the audit trail that allows me to see when and who did what in an investigation? According to the doc: "Investigation details from investigations created in versions earlier than 4.6.0 of Splunk Enterprise Security are stored in two KV Store collections, investigative_canvas and investigative_canvas_entries. Those collections are preserved in version 4.6.0 but the contents are added to the new investigation KV Store collections. So to restore, you may need to restore investigation, investigation_attachment, investigation_event, investigation_lead, investigative_canvas, and investigative_canvas_leads." But except for the investigation KV store (| rest /services/storage/investigation/investigation) I can't access the other KV stores. Is this a missing functionality?   Thanks!
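For what it's worth, KV store collections are normally reachable through the generic collection-data REST endpoint; a hedged sketch assuming the collections live in the ES app, using a collection name from the doc quote above:

curl -k -u admin https://<sh>:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/storage/collections/data/investigation_event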
Hello!  I'm trying to set an alert that lets me know if tasks in a specific queue pass a specific duration.  The search has been giving me issues.  I tried a transaction line, but I don't have an endswith.  Does anyone know how to run a search like this? I'm trying something like:

earliest=-30d@d index=[DATA] sourcetype=incident_history incident_type=[SPECIFIC QUEUE] event_type=[SPECIFIC ACTION (LIKE A TASK ON HOLD)]
| transaction incident_id when startswith=[SPECIFIC ACTION (LIKE A TASK ON HOLD)] endswith= > 72h
| table incident_id, duration
| sort - duration

It's not a transaction, but it's the only thing I could think of.  What would be a search command for when an incident_id has been in a specific queue past a specific duration? Any help would be appreciated.
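A sketch of a stats-based alternative that avoids transaction entirely: take the latest time each incident entered the queue, measure how long it has sat there, and keep anything over 72 hours (the bracketed values are the placeholders from the post):

earliest=-30d@d index=[DATA] sourcetype=incident_history incident_type=[SPECIFIC QUEUE] event_type=[SPECIFIC ACTION]
| stats latest(_time) AS entered_queue by incident_id
| eval duration_hours=round((now()-entered_queue)/3600,1)
| where duration_hours > 72
| sort - duration_hours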
This article explains how to change the TTL for a saved search individually: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2105/Search/Extendjoblifetimes I want to change the default TTL of any and all saved searches; otherwise, my team and I have to remember to change this for each new search we save.
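On Splunk Enterprise this would be a default-stanza setting in savedsearches.conf; a sketch, with the caveat that the value is an example in seconds and that on Splunk Cloud this likely means a support request rather than a direct file edit:

[default]
dispatch.ttl = 7200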
Hi, here is my log; what is the rex to extract "0000A0@#0000" and "mymodulename"?

2021-07-14 23:59:05,185 INFO [APP] User: 0000A0@#0000 || module: mymodulename

Any idea? Thanks
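A sketch that keys off the literal labels in the event, assuming neither value contains whitespace:

| rex "User:\s+(?<user>\S+)\s+\|\|\s+module:\s+(?<module>\S+)"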