All Topics


Hi all, I have two queries. The first returns a list of files, and the second should use those files as its source to get some results. The output of the first query may contain many files, and I want to use all of them together in the second query. Does anyone have an idea of how to do this?
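One possible approach is a subsearch: run the file-list query inside brackets and return the file names as the source field, which Splunk ORs into the outer search. A sketch with hypothetical index, sourcetype, and field names (note the default subsearch limits of roughly 10,000 results and 60 seconds):

```spl
index=app_logs
    [ search index=app_logs sourcetype=file_inventory
      | dedup file_name
      | rename file_name as source
      | fields source ]
| stats count by source
```

If the file list can exceed the subsearch limits, writing it to a lookup with outputlookup and reading it back in the second query is a common alternative.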
Hi Splunkers, I'm trying to build a fairly common search: track when a Windows/Active Directory account is changed from disabled to enabled. The starting point, the switch to enabled, is not a problem for me, because I know that tracking EventCode=4722 covers this scenario. The "but" is the following: the customer wants to be able to distinguish legitimate changes from illegitimate ones. There are two typical scenarios, one permitted and one not.

New user: when a new user arrives in the company and their user account is created, Active Directory first generates an account creation event and then generates a user account enabled event. This case should not be alerted because it is a normal process.

User who left the company: when a user left the company some time ago and their user account changes status to Enabled, that is an abnormal event, so it should be alerted.

So, my question is: how can I distinguish the legitimate situation from the illegitimate one?
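One hedged sketch: search 4720 (account created) and 4722 (enabled) together, and only flag a 4722 that arrives without a 4720 for the same account shortly before it. The index and sourcetype are assumptions, and maxspan should be set to however long normal provisioning takes:

```spl
index=wineventlog sourcetype=WinEventLog:Security EventCode IN (4720, 4722)
| transaction TargetUserName maxspan=5m
| search EventCode=4722 NOT EventCode=4720
| table _time TargetUserName EventCode
```

Groups containing both codes are the normal new-user flow; a 4722 alone means a previously existing (likely long-disabled) account was re-enabled.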
Hello, as you can see, I stat events by the bin time value, but when a bin's count is 0, nothing is displayed. I would like to display the results even when the result is 0, but only for the current hour or earlier hours; that is, I don't want to display 0 for a bin time that is later than the current hour.

index=toto sourcetype=titi | bin span=1h _time | eval time = strftime(_time, "%H:%M") | stats count as Pb by s time | search Pb >= 3 | stats dc(s) as nbs by time | rename time as Heure

I tried the following, but it doesn't work:

| appendpipe [ stats count as _events | where _events = 0 | eval nbs = 0 ]

Could you help, please?
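One possible direction, assuming the goal is zero-filled hourly buckets up to (and not beyond) the current hour: let timechart create the buckets over the search window and fill the gaps, instead of appending rows afterwards. A sketch:

```spl
index=toto sourcetype=titi earliest=-24h@h
| bin span=1h _time
| stats count as Pb by s _time
| where Pb >= 3
| timechart span=1h dc(s) as nbs
| fillnull value=0 nbs
| eval Heure=strftime(_time, "%H:%M")
| table Heure nbs
```

Because the search window ends at now, timechart never emits a bucket later than the current hour, so no future 0 rows appear.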
Hey, Is it possible to export every field from a Splunk Search via a Dashboard? Thanks, Patrick
Hi, suppose everything (for example, a frontend web application, backend microservices, databases...) is running in a Kubernetes cluster, and appdynamics-operator, appdynamics-cluster-agent, and serverviz-machine-agent are deployed in the same cluster. I want to know whether it is possible to monitor applications running inside the Kubernetes cluster without installing an application agent.

Log example from the frontend web application; it contains response status, request time, upstream response time, and so on:

198.13.6.0 - - [10/Mar/2022:06:47:31 +0000] "POST /test/api/v2/um/inviteUser?emailAdd=qiqguo@cisco.com HTTP/1.1" 200 40 "http://192.168.0.10/spade/user-dashboard/admin/invite-user" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0" 711 1.948 [test-ns-web-app-9080] [] 198.13.5.62:9080 40 1.948 200 f7cfa3b67e132c961c7a02c1a7445145
198.13.6.0 - - [10/Mar/2022:06:47:31 +0000] "POST test/api/v3/login/validClaim HTTP/1.1" 200 4 "http://10.75.189.81/spade/user-dashboard/admin/invite-user" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:97.0) Gecko/20100101 Firefox/97.0" 937 0.004 [test-ns-web-app-9080] [] 198.13.5.62:9080 4 0.003 200 065863b5bdd28a6f0db5e061cd4944af
I'm getting logs from a dockerized in-house developed application and ingesting them into Splunk. There are 3 types of logs coming into the log file:

1. Application logs (single line, internal format)
2. uWSGI logs (multiline)
3. ModSecurity serial logging (multiline)

The logs are forwarded to a remote syslog server and then ingested into Splunk with a universal forwarder. Since these logs are in different formats, I want to separate them into different indexes for different processing approaches. Is there any good documentation, forum post, or tutorial that describes an effective way to separate different log types from a mixed source? Thank you!
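For a mixed source, index-time routing with props.conf and transforms.conf on the indexer (or a heavy forwarder) is the usual approach; a universal forwarder alone cannot split events by regex. A sketch, where the sourcetype name and the regexes are placeholders to adapt to the actual formats:

```ini
# props.conf
[mixed_app_logs]
TRANSFORMS-routing = route_uwsgi, route_modsec

# transforms.conf
[route_uwsgi]
REGEX = ^\[pid:\s\d+
DEST_KEY = _MetaData:Index
FORMAT = uwsgi

[route_modsec]
REGEX = ModSecurity
DEST_KEY = _MetaData:Index
FORMAT = modsecurity
```

Events matching neither regex stay in the index set in inputs.conf. The relevant manual page is "Route and filter data" in the Splunk Enterprise Forwarding Data documentation.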
Hi, what is the recommended way to index message trace logs? Currently we are using the Microsoft Office 365 Reporting Mail Add-on for Splunk. Also, the older version of the Splunk Add-on for Microsoft Office 365 had ServiceStatus and ServiceMessage, and I don't see them in the latest release.
Hi, I am unable to create a timechart for specific field value aggregations. I have one field with 4 possible values. One timechart needs to be the total count across all 4 values, and the second timechart needs to be the total over 2 of the field values. The only thing in the legend should be TOTAL. Here is what my timechart and XML code currently look like: [screenshots of the timechart and XML were attached here]. Can you please help? Thanks, Patrick
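One way to get a single TOTAL series per chart, sketched with placeholder field and value names: timechart can aggregate across all values with a plain count, and across a subset with count(eval(...)); renaming the series TOTAL keeps a single legend entry.

```spl
index=my_index
| timechart span=1h count as TOTAL

index=my_index
| timechart span=1h count(eval(my_field IN ("value1", "value2"))) as TOTAL
```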
Gentlemen, how can I use eval to assign a field the values of 2 different fields? In my events, I have 2 fields: empID and Non-empID. I want eval to create a new field called identity, and it should have the value of either empID or Non-empID, whichever is present. I hope I am clear. I tried

eval identity = coalesce(empID, Non-empID)

but this didn't work. Any suggestions? Any other way to get this done if eval doesn't do it? Eventually I am going to have a table as follows, and the identity column should consolidate empID / Non-empID, whichever is present for that employee record.

identity | First | Last | email | Emp ID | Non-Emp ID
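The likely issue is the hyphen: inside an eval expression, Non-empID is parsed as "Non minus empID", so the field name has to be wrapped in single quotes. A sketch:

```spl
... | eval identity = coalesce(empID, 'Non-empID')
| table identity First Last email empID Non-empID
```

Single quotes around a field name are only needed inside eval/where expressions; commands like table take the name as-is.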
Is it possible to configure an HF's inputs.conf to receive data from a UF? Please point me to the relevant Splunk documentation.
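Yes: the HF side is a splunktcp receiving input, and the UF points its outputs at it; the relevant pages in the Forwarding Data manual are named roughly "Enable a receiver" and "Configure forwarding with outputs.conf". A minimal sketch (port 9997 is the conventional choice, not a requirement, and the hostname is a placeholder):

```ini
# inputs.conf on the heavy forwarder
[splunktcp://9997]
disabled = 0

# outputs.conf on the universal forwarder
[tcpout:hf_group]
server = hf.example.com:9997
```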
Why can't free trial Splunk Cloud accounts access the REST API?
Hi, I have a requirement: for certain appName values in the table below, the drilldown panel should show a message like "as it is a lite application it is not applicable" instead of "No results". The lite apps are FGS, CMS, and abd. On click of the success/failure count for a lite app, the drilldown panel should show the above message.

appName  success  failure
abd      1        12
profile  5        13
FGS      7        14
DSM      9        15
CMS      3        12

Thanks in advance
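A possible sketch of the Simple XML piece, assuming a table drilldown: a <condition> with a match expression can branch on the clicked appName, setting either a message token or the token the result panel depends on. The token names here are invented:

```xml
<drilldown>
  <condition match="match($row.appName$, &quot;^(FGS|CMS|abd)$&quot;)">
    <set token="lite_msg">As it is a lite application, it is not applicable</set>
    <unset token="show_results"></unset>
  </condition>
  <condition>
    <set token="show_results">true</set>
    <unset token="lite_msg"></unset>
  </condition>
</drilldown>
```

The drilldown panel would then use depends="$show_results$", and an <html> element displaying $lite_msg$ would use depends="$lite_msg$".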
When I run outputlookup from Dashboard Studio, a security error occurs. How can I allow this?
I have the following in a statistics table on a dashboard:

index=* <do search> | dedup B C | table _time B C D E F J | sort - _time

I would like to have a count at the end of each row telling how many events were deduplicated.
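One way, sketched: run eventstats before the dedup so each surviving row carries the size of its B/C group (which is 1 plus the number of duplicates removed):

```spl
index=* <do search>
| eventstats count as group_size by B C
| dedup B C
| table _time B C D E F J group_size
| sort - _time
```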
I am looking to export the results of a Splunk search that contains transforming commands. When I run the same search in the web GUI, the live results "hang" at 50,000 events, but once the search is complete it shows more than 300,000 (screenshots below). Using the Splunk API, I want to export all results in JSON format, and I only want the final results; I do not want the results as they are streamed. In essence, I want to avoid the API returning any row where "preview":true. What am I missing?

[Screenshot: while performing the search] [Screenshot: finished results]

Using Python 3.9's requests, my script contains the following:

headers = {'Authorization': 'Splunk %s' % sessionKey}
parameters = {'exec_mode': 'oneshot', 'output_mode': output_type, 'adhoc_search_level': 'fast', 'count': 0}
with post(url=baseurl + '/services/search/jobs/export', params=parameters,
          data={'search': search_query}, timeout=60, headers=headers,
          verify=False, stream=True) as response:
I am indexing email data that Splunk reads from an inbox folder (via TA-mailclient). Those emails contain a csv file that comes as a file attachment to the email. Below is an example where the field name of the attachment is file_content and the field value is:

Stopped by Reputation Filtering,Stopped as Invalid Recipients,Spam Detected,Virus Detected,Stopped by Content Filter,Total Threat Messages,Clean Messages,Total Attempted Messages
9.28068485506,0.0,45.1350500141,0.00114191624597,1.53311465023,55.9499914356,44.0500085644,--
251946,0,1225297,31,41620,1518894,1195841,2714735

I want to be able to manipulate the results to look like the table below:

Stopped by Reputation Filtering | Stopped as Invalid Recipients | Spam Detected | Virus Detected | Stopped by Content Filter | Total Threat Messages | Clean Messages | Total Attempted Messages
9.280684855 | 0 | 45.13505001 | 0.001141916 | 1.53311465 | 55.94999144 | 44.05000856 | --
251946 | 0 | 1225297 | 31 | 41620 | 1518894 | 1195841 | 2714735

Can someone please advise how to achieve this?
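A possible sketch in SPL, splitting the attachment into lines and the data lines into cells by position. The urldecode("%0A") call is a common workaround to get a literal newline for split, and the column order follows the header in the sample:

```spl
...
| eval row=mvindex(split(file_content, urldecode("%0A")), 1, -1)
| mvexpand row
| eval cell=split(row, ",")
| eval "Stopped by Reputation Filtering"=mvindex(cell,0),
       "Stopped as Invalid Recipients"=mvindex(cell,1),
       "Spam Detected"=mvindex(cell,2),
       "Virus Detected"=mvindex(cell,3),
       "Stopped by Content Filter"=mvindex(cell,4),
       "Total Threat Messages"=mvindex(cell,5),
       "Clean Messages"=mvindex(cell,6),
       "Total Attempted Messages"=mvindex(cell,7)
| table "Stopped by Reputation Filtering", "Stopped as Invalid Recipients", "Spam Detected", "Virus Detected", "Stopped by Content Filter", "Total Threat Messages", "Clean Messages", "Total Attempted Messages"
```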
I have the following log that Splunk is not parsing well:

msg=id=123342521352 operation=write

How can I write a query so that the ID is parsed by itself? The goal is to be able to build a table like:

id | operation
123342521352 | write

Unfortunately, when I tried "table id operation", the id column is always empty, as the field does not seem to be parsed correctly.
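Since the automatic key-value extraction stops at the first = in msg=id=..., an inline rex can pull the fields out at search time. A sketch (the index name is a placeholder; anchor the regex more tightly to the actual format as needed):

```spl
index=my_index "operation="
| rex "id=(?<id>\d+)\s+operation=(?<operation>\w+)"
| table id operation
```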
Hi, long time reader, first time poster. I've cobbled together this query that generates a count by status for last week and the week before, and I would like to add a PercentChange column.

index="my_index" container_label=my_notables container_update_time!=null earliest=-14d@w0 latest=@w0
| fields id, status, container_update_time
| eval Time=strftime(_time,"%m/%d/%Y %l:%M:%S %p")
| eval container_update_time_epoch = strptime(container_update_time, "%FT%T.%5N%Z")
| sort 0 -container_update_time
| dedup id
| eval status=case((status="dismissed"), "Dismissed (FP)", (status="resolved"), "Resolved (TP)", true(), "Other")
| eval marker=if(relative_time(now(),"-7d@w0")<container_update_time_epoch, "WeekReporting", "PriorWeek")
| eval _time=if(relative_time(now(),"-7d@w0")<container_update_time_epoch, container_update_time_epoch, container_update_time_epoch+60*60*24*7)
| chart count by status marker

I know I need to incorporate the following eval somehow; I'm just not sure how to tie it all together to get it to show up in the format shown above.

| eval PercentChange= if(PriorWeek!=0,(WeekReporting-PriorWeek)/PriorWeek*100,WeekReporting*100)

I'll be honest, I'm not sure if I still need the final eval, so I'll gladly accept any other suggestions that make this more efficient. I appreciate any and all tips or help to make this work. Cheers, Michael
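Since chart count by status marker leaves one row per status with PriorWeek and WeekReporting columns, the percent-change eval can simply be appended after it; a sketch, with fillnull guarding statuses missing from one of the weeks:

```spl
...
| chart count by status marker
| fillnull value=0 PriorWeek WeekReporting
| eval PercentChange = if(PriorWeek != 0, round((WeekReporting - PriorWeek) / PriorWeek * 100, 1), WeekReporting * 100)
```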
Hello all, can someone please help me understand the functionality of AUTO_KV_JSON? Does it modify the functionality of KV_MODE? If so, under what scenario? From my experiments it doesn't seem to do anything when traversing the permutations of INDEX_EXTRACTIONS, KV_MODE, and AUTO_KV_JSON. We typically just use the following settings in our sourcetype definitions:

KV_MODE=json
AUTO_KV_JSON=true

[Screenshot: table of results]

I'd appreciate a response. Thank you.
I'm trying to extract a report for devices on my network. Home Assistant sends a log record with a value of 1 when a device is present and 0 when it's not, but sometimes it loses the record of the devices. In the sample data below, I need to find the first (value=0) occurrence after the last (value=1) for each "Friendly Name" per day. Can someone help me with this, please?

TimeStamp     | Friendly Name | value
9/3/22 12:48  | User B        | 0
9/3/22 11:58  | User B        | 0
9/3/22 10:32  | User B        | 1
9/3/22 10:27  | User B        | 0
9/3/22 7:44   | User B        | 1
9/3/22 7:22   | User B        | 1
9/3/22 0:15   | User B        | 0

In this case I need the second record (9/3/22 11:58).
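One possible sketch: per device per day, record the timestamp of the last value=1 with eventstats, then keep the earliest value=0 that comes after it. Field names follow the sample; the index name is a placeholder:

```spl
index=homeassistant
| bin _time span=1d as day
| eventstats max(eval(if(value=1, _time, null()))) as last_one by "Friendly Name" day
| where value=0 AND _time > last_one
| stats min(_time) as first_zero_after by "Friendly Name" day
| eval first_zero_after=strftime(first_zero_after, "%m/%d/%y %H:%M")
```

On the sample this would return 9/3/22 11:58 for User B; days with no value=1 at all drop out, which may need separate handling.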