All Topics

Hi all, I'm hoping someone can help or point me in the right direction. I have two events being fed into Splunk: one raising an event flag, the other removing it.

Raising:

Sep 2 10:32:45 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:32:42 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was raised

Removal:

Sep 2 10:34:33 SOFTWARE CEF:0|SOFTWARE|CLIENT|42|Agent Log Event|Agent Log Event|high|id=123 shost=Management start=2022-09-02 10:34:33 cs1Label=Affected Agents cs1=[SERVERNAME] (ip: None, component_id: ID) msg='AgentMissing' status flag was removed

After some browsing online and through the Splunk support pages I have put together the following query:

(index=[INDEX] *agentmissing*) ("msg='AgentMissing' status flag was raised" OR "msg='AgentMissing' status flag was removed")
| rex field=_raw ".*\)\s+(?<status>.*)"
| stats latest(_time) as flag_finish by connection_type
| join connection_type
    [ search index=[INDEX] ("msg='AgentMissing' status flag was raised") connection_type=*
    | stats min(_time) as flag_start by connection_type]
| eval difference=flag_finish-flag_start
| eval flag_start=strftime(flag_start, "%Y-%m-%d %H:%M")
| eval flag_finish=strftime(flag_finish, "%Y-%m-%d %H:%M")
| eval difference=strftime(difference,"%H:%M:%S")
| table connection_type, flag_start, flag_finish, difference
| rename connection_type as Hostname, flag_start as "Flag Raised Time", flag_finish as "Flag End Time", difference as "Total Time"
| sort - difference

The above works, but because I am using stats latest it only shows the latest occurrence of the event. I would like to display the time between these events for multiple occurrences. For example, where the flag above was open between 7:47 and 9:31, I would also like to see the other occurrences as well. TIA!
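One way to handle repeated raise/remove pairs, instead of stats latest plus a join, is to sort events per host and use streamstats to carry the most recent raise time onto each removal. This is only a sketch against the field names in the question ([INDEX] and connection_type are placeholders from the original post), and it assumes each removal should pair with the nearest preceding raise:

```
(index=[INDEX] *agentmissing*)
    ("msg='AgentMissing' status flag was raised" OR "msg='AgentMissing' status flag was removed")
| rex field=_raw "status flag was (?<status>raised|removed)"
| sort 0 connection_type _time
| streamstats last(eval(if(status=="raised", _time, null()))) as flag_start by connection_type
| where status=="removed" AND isnotnull(flag_start)
| eval difference=_time-flag_start
| eval "Flag Raised Time"=strftime(flag_start, "%Y-%m-%d %H:%M"),
       "Flag End Time"=strftime(_time, "%Y-%m-%d %H:%M"),
       "Total Time"=tostring(difference, "duration")
| table connection_type "Flag Raised Time" "Flag End Time" "Total Time"
```

Because every removal event survives the where clause, this produces one row per raise/remove pair rather than only the latest one.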
I have to decrease the font size of the field names (subgroup, platforms, bkcname, etc., all fields present in the table) and make the counts in the table bold. But I want to change only one particular table, not all the tables in the dashboard.

<row>
<panel>
<title>Platform wise Automation Status Summary</title>
<table>
<search>
<query>index=network_a

I want to change only the table above (Platform wise Automation Status Summary). Any help would be greatly appreciated!!
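One common approach is to give the panel an id and scope CSS to it via an <html depends> block, so the styling does not leak into other tables. This is only a sketch: the id my_status_table and the $alwaysHideCSS$ token are assumptions, and the exact cell selectors can vary between Splunk versions, so inspect the rendered table in your browser to confirm them:

```xml
<row>
  <panel id="my_status_table">
    <title>Platform wise Automation Status Summary</title>
    <html depends="$alwaysHideCSS$">
      <style>
        /* smaller font for the header (field-name) row, only in this panel */
        #my_status_table table th { font-size: 10px !important; }
        /* bold data cells (the counts), only in this panel */
        #my_status_table table td { font-weight: bold !important; }
      </style>
    </html>
    <table>
      <search>
        <query>index=network_a ...</query>
      </search>
    </table>
  </panel>
</row>
```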
Hi, I'm trying to extract some fields from my Aruba access point in order to be CIM compliant. For authentication I have two kinds of event:

Login failed:
cli[5405]: <341004> <WARN> AP:ML_AP01 <................................>  Client 60:f2:62:8c:a8:a7 authenticate fail because RADIUS server authentication failure

Login success:
stm[5434]: <501093> <NOTI> AP:ML_AP01 <..................................> Auth success: 60:f2:62:8c:a8:a7: AP ...................................ML_AP01

My goal is to extract the MAC address after "Client" in the first log and the MAC after "Auth success" in the second one into a common field called "src". Can someone please help me? Thanks in advance!
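A single rex with an alternation can cover both formats, since the MAC follows either "Client" or "Auth success:". A sketch against the two sample events above:

```
your_search
| rex field=_raw "(?:Client|Auth success:)\s+(?<src>(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2})"
```

Once it works in search, the same regex can be made permanent as an EXTRACT-src entry in props.conf under your Aruba sourcetype, so src is available to the CIM Authentication data model.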
Hi all, I have JSON data like this:

{
  "Info": {
    "Unit": "ABC",
    "Project": "XYZ",
    "Analysis Summary": {
      "DB 1":  {"available": "1088kB",  "used": "172.8kB",   "used%": "15.88%", "status": "OK"},
      "DB2 2": {"available": "4096KB",  "used": "1582.07kB", "used%": "38.62%", "status": "OK"},
      "DB3 3": {"available": "128KB",   "used": "0",         "used%": "0%",     "status": "OK"},
      "DB4 4": {"available": "16500KB", "used": "6696.0KB",  "used%": "40.58%", "status": "OK"},
      "DB5 5": {"available": "22000KB", "used": "9800.0KB",  "used%": "44.55%", "status": "OK"}
    }
  }
}

I want to create a table like this:

Database  available  used       used%   status
DB1       4096KB     1582.07kB  38.62%  OK
DB2       1088kB     172.8kB    15.88%  OK
DB3       16500KB    6696.0KB   40.58%  OK
DB4       22000KB    9800.0KB   44.55%  OK
DB5       128KB      0          0%      OK

I know how to extract the data, but I am not able to put it into a table in this format. Does anyone have an idea?
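One pattern that works for nested JSON like this is to flatten the event, transpose the field names into rows, pull the database name out of the JSON path with rex, and pivot back with xyseries. A sketch, assuming the event is auto-extracted by spath (or KV_MODE=json):

```
your_search
| spath
| fields "Info.Analysis Summary.*"
| transpose
| rename column AS path, "row 1" AS value
| rex field=path "Analysis Summary\.(?<Database>[^.]+)\.(?<attr>.+)$"
| xyseries Database attr value
| table Database available used used% status
```

transpose turns each flattened field (e.g. "Info.Analysis Summary.DB 1.available") into one row, and xyseries rebuilds one row per database with a column per attribute.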
Hi, I have installed the Splunk forwarder on an AIX server and can successfully see the server-level results (CPU, df, memory) in the dashboard. I am now planning to install an add-on for WebSphere Process Server version 7.0. Does the Splunk Add-on for WebSphere Application Server work with this older version of WebSphere Process Server? Your input would be appreciated.
Hello Splunk enjoyers! I have a problem. Information about routers arrives every minute. What I have: name_of_router and serial_number of the client in index=routers. What I want: an alert if the serial_number has changed. How should I do this? @splunk
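One way to detect a change (a sketch against the field names in the question) is to count distinct serial numbers per router over the alert's time window and fire when there is more than one:

```
index=routers
| stats dc(serial_number) AS serial_count values(serial_number) AS serials by name_of_router
| where serial_count > 1
```

Saved as an alert running, say, every 5 minutes over the last 10 minutes with "trigger when number of results > 0", this fires whenever a router reports two different serial numbers within the window, and the serials column shows both values.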
Hi all, I wish to generate login times for a list of users specified in a lookup table titled user_list.csv. The column header for the users in this lookup is "IDENTITY". Currently I have an index that, on its own without the lookup table, already has a field called "Identity". This index gives me any user's login times within the specified timeframe as long as I specify Identity="*"; without Identity="*" or specific user names, no events populate. What I am trying to do is feed in a specified list of users and check their login times. However, when I use the following search query, I get 0 events:

index=logintime [| inputlookup user_list.csv | fields IDENTITY | format] IDENTITY="*"
| table _time, eventType, ComputerName, IDENTITY

I have already checked that the lookup table is in the same app. Please help, thank you.
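Field names in Splunk are case-sensitive, so a subsearch emitting IDENTITY="..." terms will never match the indexed field Identity. A sketch that renames the lookup column before format, so the subsearch produces Identity="..." terms instead:

```
index=logintime
    [| inputlookup user_list.csv
     | rename IDENTITY AS Identity
     | fields Identity
     | format]
| table _time, eventType, ComputerName, Identity
```

The trailing IDENTITY="*" from the original query is dropped here, since the subsearch already restricts the results to the users in the lookup.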
Hi, I have a metric with one dimension containing an integer value. I need to apply a calculation to the metric based on the dimension value. The formula to apply to each data point would be something like this:

metric_value*100/dimensionA_value

I have seen dimensions used extensively as filters, but I was not able to find a way to reference the dimension value so that I can use it in a calculation like the one above. Any idea how I could accomplish that? Thanks in advance, Cesar
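With mstats you can split by the dimension, which makes its value available as an ordinary field that eval can reference. A sketch, where my.metric and dimensionA are placeholder names and the dimension's string value is assumed to convert cleanly with tonumber:

```
| mstats avg(_value) AS metric_value WHERE metric_name="my.metric" span=1m BY dimensionA
| eval adjusted = metric_value * 100 / tonumber(dimensionA)
```

Each output row then carries both the aggregated metric value and its dimensionA value, so the formula from the question can be applied per data point.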
I am getting "The search job terminated unexpectedly" in a dashboard. In search, the same index works fine, and this happens in one dashboard only; the other dashboards work fine. I don't know what the reason for this issue is. Can anyone please help me? Thanks in advance.
Hi, how can I extract the open episodes, along with the ServiceNow incident raised against each episode, in Splunk ITSI? Thanks!
We finally migrated from the Microsoft Azure Add-on for Splunk to the Splunk Add-on for Microsoft Cloud Services. In the Microsoft Azure Add-on's input configuration it was possible to specify the Event Hub sourcetype manually, but in the Splunk Add-on for Microsoft Cloud Services we have to choose from the provided values. The problem is that we need the values azure:ad_signin:eventhub and azure:ad_audit:eventhub, but the Splunk Add-on for Microsoft Cloud Services provides only mscs:azure:eventhub.

Based on the log data from Azure, there is a category field with the values SignInLogs and AuditLogs. From it I can tell which is the audit log and which is the sign-in log, and change the sourcetype for each log type. On the heavy forwarder where the app is deployed (/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/default) I added the following config, but nothing changed; the sourcetype stays mscs:azure:eventhub. Any ideas what I'm missing?

props.conf

[mscs:azure:eventhub]
TRANSFORMS-rename = SignInLogs,AuditLogs

transforms.conf

[SignInLogs]
REGEX = SignInLogs
SOURCE_KEY = field:category
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_signin:eventhub
WRITE_META = true

[AuditLogs]
REGEX = AuditLogs
SOURCE_KEY = field:category
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_audit:eventhub
WRITE_META = true
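Two things stand out in a config like the one above: index-time transforms run before search-time field extraction, so SOURCE_KEY = field:category cannot see the JSON field category and the usual approach is to match the raw event text instead; and edits belong in local/, not default/, where they survive app upgrades. A hedged sketch keyed off the raw text (the exact JSON shape of the event is an assumption, so check a raw event to confirm how category appears, and restart the heavy forwarder after the change):

```
# props.conf (in .../Splunk_TA_microsoft-cloudservices/local/ on the heavy forwarder)
[mscs:azure:eventhub]
TRANSFORMS-set_sourcetype = set_signin_sourcetype, set_audit_sourcetype

# transforms.conf (same local/ directory; SOURCE_KEY defaults to _raw)
[set_signin_sourcetype]
REGEX = "category"\s*:\s*"SignInLogs"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_signin:eventhub

[set_audit_sourcetype]
REGEX = "category"\s*:\s*"AuditLogs"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:ad_audit:eventhub
```

WRITE_META is not needed here, since DEST_KEY = MetaData:Sourcetype already rewrites the metadata key.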
Hi all, is there a way in which Splunk can generate an alert when backup and restoration exercises are conducted? Is there any use case that can do this? Any assistance on this would be appreciated.
Hi, is there a way to use CSS to fix the font size of text in the Status Indicator?
Hi, we are using the VMware Carbon Black Cloud app, and the VMware logs are pulled from AWS S3 buckets. The index contains the logs; however, the app's dashboards do not work when configured with that same index. Please help remediate. Thanks.
Hi, I have a scheduled alert that sends out an email every 7 days. The sysadmin turned off the server for whatever reason and forgot to turn it back on, so obviously the report didn't trigger. Is it possible to get/generate the report that was supposed to come in? I am at a loss here, having just found out today. Thanks.
Greetings, I've been asked to provide log data for a specific form that has been accessed over a certain time period. As the data are going to leave our organization, I want to filter them down to only the relevant events. I'm looking for events in which a certain HTML form has been accessed, and I want to display events shortly before and after that show the same user agent. I've attempted a few things; in the latest query I tried to use the map function, but I'm not sure why I receive the error "Error in 'map': Did not find value for required attribute 'useragent'."

index=iis_logs [search "https://example.com/form.html"
| eval start=_time-15
| eval stop=_time+30
| eval useragent=_cs_User_Agent
| map search="search index=* cs_User_Agent=$useragent$ earliest=$start$ latest=$stop$"]

Edit: when I put, for example, cs_User_Agent=*Mozilla*, there are results surrounding the relevant events, but that is not the data I am looking for.
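map substitutes $tokens$ from fields present in the results piped into it, so the eval'd fields must be kept, and map is normally the outer command rather than being wrapped inside a subsearch. A sketch, assuming the extracted field is really cs_User_Agent (rather than _cs_User_Agent) and that searching index=iis_logs in the inner search is acceptable:

```
search index=iis_logs "https://example.com/form.html"
| eval start=_time-15, stop=_time+30, useragent=cs_User_Agent
| fields useragent start stop
| map maxsearches=100 search="search index=iis_logs cs_User_Agent=\"$useragent$\" earliest=$start$ latest=$stop$"
```

The fields command ensures useragent, start, and stop reach map; maxsearches caps the number of inner searches so a broad outer result set does not fan out uncontrollably.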
Hello all, hoping someone may be able to help. I have an export from an internal tool in the form of a CSV with a column named ip. I uploaded this as a lookup (name.csv) and verified I can see the IP information with | inputlookup name.csv; the rows of IP addresses show. I have a base search that returns data, and I want to see if any of the src or dest IPs from my search match the IP addresses listed in name.csv. The search returns a few thousand events, and I can find the IPs in question in src and dest when I search without the lookup. Currently my search looks like this:

(index=name1 OR index=name2 OR index=name3) src_ip_country=United States action=allowed
| stats count by src, dest
| sort count
| reverse
| lookup name.csv ip OUTPUT target_ip
| table target_ip, src, dest

This search gives me a tabled output with the src and dest fields populated, but nothing in the "target_ip" field. Any ideas? Thank you.
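lookup matches on the input field you name, and after the stats the results only contain src, dest, and count; there is no field called ip, so nothing ever matches (and OUTPUT target_ip would also require a target_ip column in the CSV). A sketch that checks src and dest against the CSV's ip column instead; matched_src and matched_dest are names made up for illustration:

```
(index=name1 OR index=name2 OR index=name3) src_ip_country="United States" action=allowed
| stats count by src, dest
| lookup name.csv ip AS src OUTPUTNEW ip AS matched_src
| lookup name.csv ip AS dest OUTPUTNEW ip AS matched_dest
| where isnotnull(matched_src) OR isnotnull(matched_dest)
| table src, dest, count, matched_src, matched_dest
| sort - count
```

Rows where either side hit the CSV keep a non-null matched_* value, so the where clause leaves only the traffic involving the listed IPs.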
Hi, I need a rex mode=sed command to remove the quotation marks and the numbers inside them:

OUTPUT file "19214132.IKU" copied to output directory
OUTPUT file "19315133.IKU" copied to output directory
OUTPUT file "19416134.IKU" copied to output directory
....

Desired result:

OUTPUT file .IKU copied to output directory
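A sed-style substitution that drops the digits and both quotation marks while keeping the extension; a sketch matched against the sample lines above:

```
your_search
| rex mode=sed field=_raw "s/\"\d+(\.IKU)\"/\1/g"
```

The capture group preserves .IKU, so "19214132.IKU" becomes .IKU; the /g flag handles events that contain more than one quoted filename.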
I am working to leverage the query below for 'Stale Account Usage' from the Splunk Security Essentials docs, which uses the lookup "account_status_tracker". The 'How to Implement' guidance includes: "The only step you'll need to take is to create a lookup called account_status_tracker, and have authentication data in Common Information Model format."

From the "Add New" lookup webpage it is not clear how I assign an appropriate "Lookup File" that will have the necessary fields in CIM format. I have looked through Splunk docs and other likely resources with no strong hits; I admit this is a new area for me. My question is: what steps do I need to take to define this lookup, including assigning an appropriate "Lookup File"? When I select existing authentication-related files as the "Lookup File", I receive error messages, for example: "Cannot find the destination field 'count' in the lookup table..." Any leads greatly appreciated.

index=* source="*WinEventLog:Security" action=success
| stats count min(_time) as earliest max(_time) as latest by user
| multireport
    [| stats values(*) as * by user
    | lookup account_status_tracker user OUTPUT count as prior_count earliest as prior_earliest latest as prior_latest
    | where prior_latest < relative_time(now(), "-90d")
    | eval explanation="The last login from this user was " . (round( (earliest-prior_latest) / 3600/24, 2) ) . " days ago."
    | convert ctime(earliest) ctime(latest) ctime(prior_earliest) ctime(prior_latest) ]
    [| inputlookup append=t account_status_tracker
    | stats min(earliest) as earliest max(latest) as latest sum(count) as count by user
    | outputlookup account_status_tracker
    | where this_only_exists_to_update_the_lookup='so we will make sure there are no results']
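Since the detection maintains the lookup itself on each run, one way to bootstrap it (a sketch; run it once over a long window) is to create the file with outputlookup so it has exactly the fields the query expects: user, count, earliest, latest.

```
index=* source="*WinEventLog:Security" action=success earliest=-90d
| stats count min(_time) AS earliest max(_time) AS latest by user
| outputlookup account_status_tracker
```

If no lookup definition named account_status_tracker exists yet, write to account_status_tracker.csv instead and then create a lookup definition named account_status_tracker pointing at that file; selecting an unrelated existing CSV as the "Lookup File" is what triggers the "Cannot find the destination field 'count'" error, because that file lacks the expected columns.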
Dear Splunk community: I have the following SPL that has been running fine for the last week or so; however, all of a sudden I am getting an unwanted last column (VALUE) that I don't expect. Can you please explain what I need to modify so that I don't get the VALUE column?

<my search>
| chart count by path_template, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5*
    [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),
      "<<FIELD>>"=if('<<FIELD>>'=0 , '<<FIELD>>', '<<FIELD>>'." (".'percent_<<FIELD>>'."%)")]
| fields - percent_* total

Really appreciate your help on this! Thanks!
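An extra column named VALUE (or NULL/OTHER) from chart ... by usually appears when some recent events carry an empty or unexpected split-by value, which would explain why the search ran fine for a week and then changed. A hedged sketch that suppresses the null/other buckets and explicitly drops the leftover column:

```
<my search>
| chart count by path_template, http_status_code usenull=f useother=f
| addtotals fieldname=total
| foreach 2* 3* 4* 5*
    [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2),
      "<<FIELD>>"=if('<<FIELD>>'=0, '<<FIELD>>', '<<FIELD>>'." (".'percent_<<FIELD>>'."%)")]
| fields - percent_* total VALUE
```

It is also worth checking a recent raw event to see what value http_status_code actually carries there, since that identifies the source of the new column.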