All Topics


Hi, I have a total of 6 checkboxes in my dashboard. By default the first checkbox is active. If I check the second checkbox, the first checkbox should be unchecked automatically; in the same way, if I check the third one, only that third checkbox should stay active and all the others should become inactive. At any time only one checkbox should be active (i.e., whichever one I checked). Please help me, it would be appreciated. Thank you.
Hi All, I am using the query below to search for certain logs:

index=int_gcg_apac_solace_166076 host="mwgcb-csrla0*U*" source="/logs/confluent/connect-distributed/apac/TW/*" "Task is being killed and will not recover until manually restarted"
| rex field=_raw "(?ms)id\=(?P<Connector>(\w+\.){1,9}\w+\-\d)\}"
| lookup region_lookup.csv "source"

But the | lookup region_lookup.csv "source" command is not returning any Region values from the lookup table. The lookup table I am trying to use looks like this:

source                                           Region
/logs/confluent/connect-distributed/apac/HK/*    HongKong
/logs/confluent/connect-distributed/apac/SG/*    Singapore
/logs/confluent/connect-distributed/apac/AU/*    Australia
/logs/confluent/connect-distributed/apac/VN/*    Vietnam
/logs/confluent/connect-distributed/apac/MY/*    Malaysia
/logs/confluent/connect-distributed/apac/ID/*    Indonesia
/logs/confluent/connect-distributed/apac/TH/*    Thailand
/logs/confluent/connect-distributed/apac/TW/*    Taiwan

Note: each source has multiple files inside it, e.g. "logs/confluent/connect-distributed/apac/TW/*" contains file paths like "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log", "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log1", "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log2" and so on, and the searched indicator "Task is being killed and will not recover until manually restarted" may appear in any of them. Is there any way I can use the lookup table as desired? Your kind advice will be highly appreciated. Thank you!
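
A possible direction, sketched under the assumption that a lookup definition (called region_lookup here, an assumed name) has been created over region_lookup.csv with match_type = WILDCARD(source) set in its transforms.conf stanza: wildcard matching only takes effect when the search references the lookup definition, not the raw .csv file, so the lookup line would become something like:

| lookup region_lookup source OUTPUT Region

With that definition in place, the wildcard patterns in the source column of the CSV can match the full file paths seen at search time, and Region gets populated for each event.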
Is there a trick to adding "value", "type" and "color" to the 'to' entities in this viz? I'm only getting the default values for recipients in one-way relationships.
I have a query:

index = "index1"
| spath output=error_code input=RAW_DATA path=MsgSts.Cd
| dedup SESSIONID
| stats count as Total sum(eval(if(error_code=2,1,0))) as Error by OPERATION
| eval Rate = round((Error/Total)*100,2)
| search Rate>20
| table OPERATION Rate Error

And the resulting table is:

OPERATION  | Rate  | Error
VerifyOTP  | 24.08 | 310

which is what I want, because I want to know which OPERATION has more than a 20% error rate in a certain time range. But now for the hard part: I want an alert that emails me the details of all 310 error events shown above. Since I use the stats command, the only information I have left is Total, Error, Rate and OPERATION. How do I get the detailed events when the rate goes above 20% according to my search?
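
A sketch of one way to keep the underlying events, assuming the same index and field extractions as the query above: replace stats with eventstats so the per-OPERATION totals are attached to every event instead of collapsing the rows, then filter to the operations over 20% and keep only the error events for the email.

index="index1"
| spath output=error_code input=RAW_DATA path=MsgSts.Cd
| dedup SESSIONID
| eval is_error = if(error_code=2,1,0)
| eventstats count as Total sum(is_error) as Error by OPERATION
| eval Rate = round((Error/Total)*100,2)
| where Rate > 20 AND is_error == 1
| table _time OPERATION SESSIONID error_code Rate Error

Because eventstats keeps the original rows, the alert's email action can include the event-level table (or _raw) rather than just the aggregated counts.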
Hello, I need help with using wildcards in a lookup. I want to exclude from my search results the events whose fields are listed in a lookup. Example search:

host=* | search NOT [| inputlookup test_lookup.csv]

The lookup test_lookup.csv contains two fields:

exe,comm
/usr/sbin/useradd,*

I need to exclude all results which have exe="/usr/sbin/useradd" and any value of comm. I added WILDCARD(comm) in the lookup definition, but it doesn't work. transforms.conf:

[test_lookup.csv]
batch_index_query = 0
case_sensitive_match = 1
filename = test_lookup
match_type = WILDCARD(comm)

What did I do wrong? Thank you.
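
One thing worth noting: match_type in transforms.conf only takes effect when the file is used through the lookup command (i.e., via the lookup definition); a NOT [| inputlookup ...] subsearch just expands the CSV rows into literal field=value terms, so the WILDCARD setting is never consulted. A sketch of an alternative pattern, assuming the definition stanza from transforms.conf above ([test_lookup.csv]) keeps match_type = WILDCARD(comm): mark the events that match the lookup and then drop them.

host=*
| lookup test_lookup.csv exe comm OUTPUT exe AS excluded
| where isnull(excluded)

Here excluded is just a marker field: any event where exe and the wildcarded comm both match a row in the lookup receives a value for it and is filtered out by the where.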
Can I use the perpetual free license for commercial purposes? @splunk
index="www1" sourcetype="access_combined_wcookie" action=* status<=400 | timechart span=1d count(action) by clientip useother=f | addtotals | eval type = if(Total>90 ,"UP","DOWN") | fields _time ... See more...
index="www1" sourcetype="access_combined_wcookie" action=* status<=400 | timechart span=1d count(action) by clientip useother=f | addtotals | eval type = if(Total>90 ,"UP","DOWN") | fields _time 194.* *.*.*.* Total type | sort - _time   I want to change the order of the x-axis field names when using it. | fields _time 194.* *.*.*.* Total type   Is there any other way than this?
Hi, I would like to extract a particular digit from brackets, index it as a field, and create hourly stats based on it. At the moment Splunk is picking the value up together with the brackets, as a string. This is a service that writes an entry every hour; when a value is present it should be added up, and when there is none it should count as 0. My goal is to chart the value for every hour on a graph so I can see how it changes.
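
Without a sample event it is hard to be exact, but here is a sketch under the assumption that each event contains a number in square brackets (for example "... [42] ...") and that your_index / your_sourcetype stand in for the real ones: extract the number with rex, chart it per hour, and fill hours with no match with 0.

index=your_index sourcetype=your_sourcetype
| rex field=_raw "\[(?<bracket_value>\d+)\]"
| timechart span=1h sum(bracket_value) as hourly_total
| fillnull value=0 hourly_total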
My Search Head receives WinEventLog://Application and WinEventLog://System, but [WinEventLog://Security] is not found. I'm using Splunk_TA_windows. This is my inputs.conf config in local:

[WinEventLog://Application] -> it works
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml=false

[WinEventLog://Security] -> does not work
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=false
index = wineventlog

[WinEventLog://System] -> it works
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml=false
index = wineventlog

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml=flase
host= WinEventLogForwardHost
index = wineventlog

This TA was copied from another server where it works fine. I am even using a domain admin account to run the service, but I still do not get the Windows Security event log.
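
A first diagnostic step, sketched on the assumption that the internal logs of the host running Splunk_TA_windows reach your indexers (your_windows_host is a placeholder): permission or channel problems reading the Security log are usually reported in splunkd.log, so searching for them can show why that one stanza produces nothing.

index=_internal host=your_windows_host source=*splunkd.log* "WinEventLog" "Security" (log_level=ERROR OR log_level=WARN)
| stats count by component, log_level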
newuser = service.users.update(username=modify_username,roles=modify_role).refresh()

AttributeError: 'Users' object has no attribute 'update'
I have an index 'telemetry' which gets data from a local directory on a standalone Splunk installation. I deleted some data from this index that came in from a particular directory, using the command:

index='telemetry' source="/data/01/*" | delete

The index still has more data from other sources (e.g. "/data/02" ...). I want to re-index the data from the deleted directory, i.e. "/data/01", again. Running splunk clean eventdata would delete the entire index. I want to wipe from disk only the part of the data that was deleted above, so that I can re-index it. How can I achieve this?
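
Two notes that may reframe this, followed by a sketch. First, | delete does not remove anything from disk; it only masks the events from future searches, so there is no supported way to wipe just that slice of the index short of cleaning the whole index. Second, re-indexing does not require the old copies to be gone, only that the files get read again, and since the old events are already masked the re-indexed copies will not show up as duplicates in searches. A commonly suggested way to feed the files back in is a one-off upload that does not go through the monitor input's checkpoint; a sketch, assuming the files still exist on disk (the file path and sourcetype below are placeholders):

splunk add oneshot /data/01/somefile.log -index telemetry -sourcetype your_sourcetype

This would be repeated (or scripted) per file under /data/01. If you need the existing monitor input itself to re-read the files instead, its checkpoint in the fishbucket has to be reset, which is a separate and more invasive step.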
Hi, I'm facing an issue where all my scheduled searches are triggered and there are no logs showing any error while trying to send the email, BUT I do not receive any email. My Splunk emailing setup is integrated with AWS SES, and I am able to send (and receive) email using the sendemail command from a Splunk search, which tells me that the SES credentials are correct. Below are the logs showing no errors while trying to send the email. Thanks!

2021-08-11 22:56:01,898 +0000 INFO sendemail:139 - Sending email. subject="|prod-us-west-2| SplunkAlert: TSD SES Email Alert Test ", results_link="https://aws-prod-west-splunk.tscloudservice.com/app/TsdApp/@go?sid=scheduler__admin__TsdApp__RMD5cd8884db53568853_at_1628722560_51987_FDE004B7-D863-4836-AC24-BFAEBA400C09", recipients="[u'muhammad.mufthi@example.com']", server="email-smtp.us-west-2.amazonaws.com"
I need to restrict my Splunk instance to be accessible only on localhost. To do this, I created a new web.conf file and put it in /opt/splunk/etc/system/local. The file has the following contents:

[settings]
server.socket_host = 127.0.0.1

When I restart Splunk, I get the following:

Waiting for web server at http://[random char]:8000 to be available...........

Rather than seeing 127.0.0.1, I see random characters, and it just sits there. What am I missing in the config file?
I was able to build a large dashboard with 10+ panels using the loadjob command, spanning the last day of any triggered results. However, I am now looking to build the same dashboard where each panel spans a week (7 days) of triggered results. loadjob was the only command that kept each panel loading quickly. Is there any way to use loadjob, or a similar command, to show a timechart spanning 7 days? For example:

| loadjob savedsearch=tech123:Residential:"name of enabled alert" artifact_offset=0
| timechart span=1d count by incident_type

I've tried putting earliest=-7d in every possible spot and I've used the time picker, but I haven't found a resolution yet... any thoughts, ideas or solutions?
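
One thing to keep in mind is that loadjob only reads back the artifact of a previous run, so it can never return more data than that run covered, no matter what earliest/latest you wrap around it. A sketch of one alternative, assuming the alert's results can be written to a summary index (for example with the collect command or the saved search's summary-indexing action; the index name summary and the source value below are assumptions): each panel then reads a week of the summarized results, which stays fast because the heavy search already ran at alert time.

index=summary source="name of enabled alert" earliest=-7d@d
| timechart span=1d count by incident_type

For this to work, incident_type has to be one of the fields the alert writes into the summary index.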
Hey everyone, I wanted to see if anyone could help me with correlation searches that fire and create a notable event on the Incident Review page, but then don't produce a 1-for-1 match when I run the search manually. What I did was look at a specific correlation search that fired on the Incident Review page over the last 24 hours. I then took that search and ran it in a new search window with the 24-hour time range picker. The notable events said that 77 events existed for that correlation search, but the manual search would return either 0 or varying numbers if I let it finish and ran it over and over a few times (none of them being 77). I made sure it wasn't a counting issue where one row covered multiple events that added up to the total. The issue seems to be the data models: I run the searches against the index(es) and get vastly different numbers than the Incident Review page, which in turn is vastly different from the data-model correlation search. Does anyone have any ideas why I'm not getting a 1=1=1 match between Incident Review, the correlation search against the data models, and the raw index searches? Any and all help/insight is greatly appreciated!
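
A sketch that sometimes helps narrow this down, assuming the correlation search runs over an accelerated data model (Authentication is only a placeholder name here): compare the accelerated and unaccelerated counts for the same window. A search with summariesonly=true only sees events that had already been accelerated at the moment it ran, which naturally drifts from both the raw index counts and a later manual re-run.

| tstats summariesonly=true count AS accelerated_count from datamodel=Authentication where earliest=-24h
| appendcols
    [| tstats summariesonly=false count AS unaccelerated_count from datamodel=Authentication where earliest=-24h]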
I have the Monitoring Console (MC) on the ES search head and have tried my own SPL searches, but I need your help please. I need to find the app, the name of each skipped search, and the reason why the searches were skipped. Thank you in advance.
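
A sketch of one way to pull this from the scheduler logs, assuming you can search the _internal index on the search head:

index=_internal sourcetype=scheduler status=skipped
| stats count by app, savedsearch_name, reason
| sort - count

The reason field explains why each run was skipped (for example, hitting the concurrent-search limit), broken down by app and saved search name.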
By default, when you chart by '_time', the major ticks displayed over a multi-week time span always use Mondays in the major tick labels. Since I am allowing people to select individual days of the week for week-over-week comparisons, it would be nice to show the selected day of the week as the major tick label. Is there a charting control variable that can be used for this? Thanks in advance, Bob Eubanks. In the attached screenshot, the user has selected 'Fridays' for comparison, yet the chart uses Mondays as the major tick label.
I'm running a tiny proof-of-concept Splunk environment across 2 VMs. Splunk Enterprise (SE) is on VM1 (Ubuntu 20.04), version 8.1.1. The universal forwarder is on VM2 (Ubuntu 20.04) and is sending the Splunk_TA_nix add-on metric data back just fine. I have installed/configured version 7.3 of the Splunk Stream Add-On for Stream Forwarders on the universal forwarder and installed the Splunk Stream App on the SE VM, also version 7.3. On the forwarder there are the following conf files in /opt/splunkforwarder/etc/apps/Splunk_TA_stream/local:

----inputs.conf----
splunk_stream_app_location = https://10.0.2.15:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0
---------------------------

----streamfwd.conf----
port = 8889
ipAddr = 127.0.0.1
----------------------------

I can't get the network stream data from the forwarder into the SE search/reporting app or the SE Stream app. The /opt/splunkforwarder/var/log/splunk/streamfwd.log is the only thing from the Stream add-on on the forwarder that puts any data into SE at all, and it includes an error that says:

(CaptureServer.cpp:2211) stream.CaptureServer - unable to ping server (<longerrorcode>): Unable to establish connection to 10.0.2.15: wrong version number

8.1 should be compatible with the 7.3 installs of either Stream app, and I haven't seen anything mandating a specific version number anywhere. Things I have tried: I can successfully reach SE at https://10.0.2.15:8000. I tried modifying the .conf files in apps/default on the forwarder, which the docs say you're not supposed to do; that didn't work. I tried all manner of switching port numbers in the .conf files and restarted many, many times. I am out of ideas. Someone please help?
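
"wrong version number" is typically what OpenSSL reports when a TLS client connects to a port that is not actually speaking TLS, so one thing worth ruling out (a sketch, run from the forwarder VM; the URL comes from the inputs.conf above) is whether Splunk Web on 10.0.2.15:8000 is really serving HTTPS rather than plain HTTP:

curl -vk https://10.0.2.15:8000/en-us/custom/splunk_app_stream/
curl -v http://10.0.2.15:8000/en-us/custom/splunk_app_stream/

If only the plain-HTTP request answers, changing splunk_stream_app_location to http:// (or enabling SSL for Splunk Web) would be the next thing to try.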
Is there a way to tell which users are hitting their search limits? I've looked for these events in the internal indexes but can't find them. Maybe a REST search?
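
A sketch of one place to look, assuming the per-user quota warnings land in _internal on the search head; the exact component and message text vary across versions, so the quoted terms and the rex below are starting points rather than exact matches:

index=_internal sourcetype=splunkd "concurrent" "quota"
| rex "user=(?<quota_user>[^\s,]+)"
| where isnotnull(quota_user)
| stats count by quota_user
| sort - count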
I am new to the SOC environment. I was tasked to create a personal dashboard. What items/data should I put into the dashboard?