All Topics

I want to group events that share a similar error-message pattern. This is what the data looks like:

Message | Count
Error replaying queued events: undefined | 1
initConfig is missing! | 1
"Error loading https://www.example.com/123 timeTaken=1 ms" | 1
"Error loading https://www.example.com/123 timeTaken=2 ms" | 1

Expected output:

Message | Count
Error replaying queued events: undefined | 1
initConfig is missing! | 1
"Script Load Error" | 2

This is the query I am using:

| eval Message.msg=case(like(Message.msg,"Error loading https://%"), "Script loading Error", 1=1, Message.msg)
| stats count by Message.msg
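A sketch of one possible fix: in eval, a field name containing a dot must be single-quoted when it is read, and assigning the result to an undotted field avoids the quoting problem on the stats side as well (this assumes the field really is named Message.msg):

```spl
| eval msg=case(like('Message.msg', "Error loading https://%"), "Script Load Error",
                1=1, 'Message.msg')
| stats count BY msg
```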
Hi, we tried to integrate our Splunk search head with SAML authentication, but we got the error "SAML response does not contain group information". I'm not sure if something is missing or mistaken in our configuration. I attached screenshots of our configuration in case you can help.
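For reference, group information normally arrives as an attribute in the SAML assertion, and the attribute-value-to-role mapping lives in authentication.conf. A rough sketch with placeholder group names (the exact attribute the IdP must release depends on the IdP configuration):

```ini
# authentication.conf (sketch; group names are placeholders)
[roleMap_SAML]
admin = IdP-Splunk-Admins
user = IdP-Splunk-Users
```

If the IdP does not send the expected role/group attribute at all, the mapping never fires and Splunk reports exactly this "no group information" error, so checking the assertion contents in a SAML tracer is usually the first step.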
Adding a hyperlink to a Splunk Cloud dashboard that sends an HTTP Authorization request header. I have a requirement to build a Splunk dashboard consisting of a table of data, where the last column contains a hyperlink; when clicked, it should call out to an external application's API, sending an HTTP Authorization request header containing the credentials to authenticate. Is this possible using Splunk Cloud? Any pointers would be greatly appreciated.
I am attempting to create a workflow for our work. Currently we have hundreds of panels in our dashboards. I have developed a solution to grab the searches from these panels, leaving me with hundreds of searches I now want to save as reports and run at midnight. Not sure if anyone is working on something similar, but I need two things:

1. Create a report from a given search query via the API.
2. Set a schedule on the created report via the API.

I'm hoping Splunk has this built in; if not, I was thinking of writing a utility for it, but I would love to save cycles!
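Splunk does expose both steps through the REST API: a POST to saved/searches creates the report, and the same request can set cron_schedule and is_scheduled. A sketch with placeholder credentials, host, report name and search:

```
# create and schedule a report in one call (sketch; all values are placeholders)
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/admin/search/saved/searches \
  -d name="my_midnight_report" \
  --data-urlencode search="index=main | stats count by host" \
  -d cron_schedule="0 0 * * *" \
  -d is_scheduled=1
```

The same parameters can also be sent in a later POST to saved/searches/{name} if you prefer to create first and schedule separately.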
Just looking for a simple way to do this. I have an input token for how many days to look back, where I want to specify one full day with a "days ago" selection.

| join type=outer name [search daysago=60 enddaysago=59

works in a manual search when I just put in 60 and 59. But when I use a chart with an input panel:

| join type=outer name [search daysago=$day$ enddaysago=($day$-1)

I've also tried an eval before the search (eval daybefore=$day$-1) and that doesn't work either. It seems like there should be a quick way to do this, but just setting a token doesn't seem to be allowed anywhere in the source code with a text input.
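In Simple XML, token arithmetic is usually done with an <eval> element inside the input's <change> block rather than inline in the search string. A sketch, assuming a text input bound to the token day:

```xml
<input type="text" token="day">
  <change>
    <!-- derive a second token from the typed value -->
    <eval token="daybefore">$day$ - 1</eval>
  </change>
</input>
```

The search can then reference both tokens directly, e.g. daysago=$day$ enddaysago=$daybefore$, with no arithmetic inside the subsearch.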
Hi team, I need some help. I want to schedule the search below to run every hour, capture the important fields (responseStatus, requestMethod, requestURL, servicePath, total requests, hour, day, etc.), and write them to a CSV output file so I can use that report for my dashboards. The idea behind this is that our application logs millions of events per day, and if we want historical data for reports, it takes a long time to run the search and load the dashboard. I want to run this every hour and append the data to the existing CSV file. I tried a lookup but it didn't work for me. Any solutions welcome.

time=2021-04-14T17:57:07+00:00 requestId=751411798490203 traceId=751411798490203 servicePath="/ecp/" remoteAddr=71.74.45.8 clientIp=24.161.128.196 clientAppVersion=NOT_AVAILABLE app_version=- apiKey=72c07648-ea14-34f2-abed-e38263580b5c oauth_leg=2-legged authMethod=oauth apiAuth=true apiAuthPath=/ecp/ oauth_version=1.0 target_bg=default requestHost=api.spectrum.net requestPort=8080 requestMethod=GET requestURL="/ecp/entitlements/v2/entitlements?divisionId=NEW.004&accountNumber=28290420" requestSize=560 responseStatus=200 responseSize=8422 responseTime=0.025 userAgent="IPVS" mapTEnabled="F" charterClientIp="V-1|IP-24.161.128.196|SourcePort-|TrafficOriginID-24.161.128.196" sourcePort="" oauth_consumer_key="72c27648-ea14-44f2-abed-e38263580b5c" x_pi_auth_failure="-" pi_log="pi_ngxgw_access"
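One common pattern for this is an hourly scheduled search that summarizes the previous whole hour and appends to a CSV lookup with outputlookup append=true. A sketch with placeholder index/sourcetype and a reduced field list:

```spl
index=my_index sourcetype=my_sourcetype earliest=-1h@h latest=@h
| bin _time span=1h
| stats count AS total_requests BY _time responseStatus requestMethod requestURL servicePath
| eval day=strftime(_time, "%Y-%m-%d"), hour=strftime(_time, "%H")
| outputlookup append=true hourly_api_summary.csv
```

Summary indexing (the collect command into a summary index) is the other standard approach, and it scales better than a forever-growing lookup file.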
Hello, I'm trying to use a UF to forward HEC data from the internet to another UF in our DMZ. It looks like this:

httplistener input (UF1) httpout output --> httplistener input (UF2 in DMZ) S2S output --> Splunk Enterprise on the LAN

If I curl both HTTP listeners I get success:

curl -k -u "x:TOKEN" "https://UF1:8088/services/collector/event" -d '{"event": "Hello, world!"}'
{"text":"Success","code":0}
curl -k -u "x:TOKEN" "https://UF2:8088/services/collector/event" -d '{"event": "Hello, world!"}'
{"text":"Success","code":0}

But I only get events in my Splunk indexer from the second curl; with the first one it looks like the output is never forwarded to UF2. There is nothing about errors in either UF's logs. My /opt/splunkforwarder/etc/system/local/outputs.conf on UF1 looks like:

[tcpout]
defaultGroup = default-autolb-group
disabled = 1

[httpout]
disabled = 0
httpEventCollectorToken = MYTOKEN
uri = https://UF2-IP:8088
batchSize = 65536
batchTimeout = 5

Thanks for your help!! Flo V.
I am trying to limit the hot/warm index size for several indexes using the homePath.maxDataSizeMB attribute; however, when restarting Splunk I get the following error:

Invalid key in stanza [main] in /indexes.conf, line 112: homepath.maxDataSizeMB (value: 5000).

Here is the stanza I have configured for that index:

[main]
homePath = volume:hot/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
homepath.maxDataSizeMB = 5000

I feel like I'm missing something simple and I can't figure out what it is.
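indexes.conf attribute names are case-sensitive, and the documented key is homePath.maxDataSizeMB with a capital P, which matches the "Invalid key" complaint about the all-lowercase homepath. The stanza would then read:

```ini
[main]
homePath = volume:hot/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
homePath.maxDataSizeMB = 5000
```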
I need to show the first 40 events per second over a 15-minute range.
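One way to sketch "first 40 events per second" is a per-second running counter with streamstats (the index and time range here are placeholders):

```spl
index=my_index earliest=-15m
| sort 0 _time
| bin _time span=1s
| streamstats count AS rank BY _time
| where rank <= 40
```

The sort 0 _time is there because search results normally arrive newest-first; drop it if "first 40" just means any 40 per second.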
I am interested in configuring a heavy forwarder to send to an additional third-party destination, like syslog-ng, using TCP with SSL. I reviewed the docs and did not see an option for specifying TCP/SSL, only tcp or udp: https://docs.splunk.com/Documentation/Splunk/8.1.3/Forwarding/Forwarddatatothird-partysystemsd#Forward_syslog_data_to_a_third-party_host Is this possible? Can someone share their outputs.conf stanza? Thanks.
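The [syslog] output stanza indeed only speaks plain tcp/udp. What is typically done for TLS instead is a dedicated [tcpout] group with sendCookedData=false plus the SSL client settings. A sketch with placeholder host and certificate paths (the SSL setting names vary slightly between Splunk versions, so check outputs.conf.spec for yours):

```ini
[tcpout:syslog_ng_tls]
server = syslog-ng.example.com:6514
sendCookedData = false
clientCert = $SPLUNK_HOME/etc/auth/client.pem
sslVerifyServerCert = true
```

Note this ships the raw event stream over TLS; syslog-ng receives plain lines, not RFC 5424-framed syslog messages.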
Hello, in a distributed environment with universal forwarder, heavy forwarder and indexers, like this one:

UF --> HF --> IDX

how do you set useACK=true in outputs.conf? Does it need to be enabled on both the universal forwarder and the heavy forwarder? We currently have it enabled only on the heavy forwarder. Thanks a lot, Edoardo
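useACK operates per hop: each sending tier waits for acknowledgement from the tier it forwards to. For end-to-end protection it is therefore normally set on both the UF and the HF. A sketch with placeholder hostnames:

```ini
# outputs.conf on the UF (hop to the HF)
[tcpout:hf_group]
server = hf.example.com:9997
useACK = true

# outputs.conf on the HF (hop to the indexers)
[tcpout:idx_group]
server = idx1.example.com:9997,idx2.example.com:9997
useACK = true
```

With it enabled only on the HF, data in flight between the UF and the HF is not protected by acknowledgements.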
Hello, I recently made my first Splunk app. I had to go back and make some changes, so I repackaged it and changed the build number. When I went to upload my new app (Apps > Install app from file), I selected the "Upgrade app. Checking this will overwrite the app if it already exists." checkbox. When I reviewed the app, none of my changes were applied. I then checked the file directory of the Splunk app and found a default.old folder. Can someone explain what is happening? Why isn't my updated app showing, and what do I need to do in the future so that my changes show? (Screenshot of the app directory attached.) Thank you, Marco
When requesting an upgrade of the Force Directed app from v3.0.1 to v3.0.3, Splunk Support replied that the app failed verification and cannot be installed in Splunk Cloud. They also indicated that there's no channel to escalate issues to Splunk Works, so I'm posting here instead, hoping it gets to the right folks. They indicated the issue is as follows:

<< Auto-Generated FAILED vetting comment: Vet: #3767 v3.0.3 "Force Directed App For Splunk" Review fails vetting and cannot be installed. This is a preliminary report. More issues may be found upon further review. Thank you for your app install request. Your app did not meet security and functionality requirements for Splunk Cloud for the following reasons: No default stanza or No values are allowed to defined before the first stanza. File: README/savedsearches.conf.spec Line: 3 Once these issues are remedied you can resubmit your app for review. In addition the following changes are recommended: null Alternatives: null >>

They then referenced an App Inspect URL that I suppose must be Splunk-internal only, as it doesn't open/resolve: https://review.us.appinspect.splunk.com/reviews/8562/appinspect-report
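The complaint is that README/savedsearches.conf.spec contains attribute lines before any stanza header; .conf.spec files must open with a stanza. A sketch of the expected shape (the attribute here is a placeholder, not taken from the app):

```
[default]
my_custom_setting = <string>
* Description of what my_custom_setting controls.
```

Moving any attribute definitions on the first lines underneath a [default] (or other) stanza header should clear this particular check.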
Hi, I wanted to update the splunk_security_essentials app (3.2.2 to 3.3.2). After the restart, I have this error under all searches:

"Could not load lookup=LOOKUP-splunk_security_essentials"

I found out that there is an automatic lookup set up like that. I ran a btool command and see this:

/opt/splunk/bin/splunk btool props list --debug | grep LOOKUP-splunk_security_essentials
/opt/splunk/etc/apps/Splunk_Security_Essentials/default/props.conf LOOKUP-splunk_security_essentials = sse_content_exported_lookup search_title AS search_name OUTPUTNEW

What can I do to remove this error? Thanks for your help!
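If the lookup definition really is missing, one way to silence the error is to blank the automatic lookup in the app's local layer, which overrides default/ without touching shipped files. A sketch (reuse the exact stanza name found in default/props.conf; the placeholder below must be replaced):

```ini
# $SPLUNK_HOME/etc/apps/Splunk_Security_Essentials/local/props.conf
[<stanza name from default/props.conf>]
LOOKUP-splunk_security_essentials =
```

That said, restoring the missing sse_content_exported_lookup definition, for example via a clean reinstall of the app, is the cleaner fix than suppressing the lookup.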
Hi, we have been migrating objects from Splunk 7.3.9 to Splunk 8.x and have found a strange issue; I hope someone has a clue. Basically, we have a lookup file with a definition using CIDR match. The CSV contains, among other fields, ip, cidr and subnet columns. For example:

ip       | cidr        | subnet
10.1.1.2 | 10.1.1.2/32 | 10.1.1.0/24

This is on the lookup definition: match type: CIDR(cidr)

However, if I try this simple query:

| makeresults
| eval ip="10.1.1.2"
| table ip
| lookup <lookup_name> cidr as ip OUTPUT subnet

it doesn't work. The exact same thing works properly in Splunk 7.3.9. Any clue? Kind regards, Tiago
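For comparison, the transforms.conf equivalent of that lookup definition looks like this (the lookup name is a placeholder):

```ini
[my_cidr_lookup]
filename = my_cidr_lookup.csv
match_type = CIDR(cidr)
```

After a migration it is worth confirming with btool (splunk btool transforms list my_cidr_lookup --debug) that match_type actually survived into the effective configuration on the 8.x search head, since a definition recreated without it silently falls back to exact matching.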
Hi there, App Inspect v2.4.0.dev13 gives me this failure:

[TRANSFORMS-extract-fields] setting in props.conf specified a regex without any named capturing group. This is an incorrect usage. Please include at least one named capturing group. File: default/props.conf Line Number: 2

The affected regexes are:

[extract-queue-statistics]
REGEX = ^.*rsyslogd-pstats\:\sim(?P<protocol>\w+)\W+(?P<port>\d+)\W\:\ssubmitted=(?P<submitted>\d+).*$

[extract-port-submitted]
REGEX = ^.*rsyslogd-pstats\:\s(?P<queue>[^:]+)\:\ssize=(?P<size>\d+)\senqueued=(?P<enqueued>\d+)\sfull=(?P<full>\d+)\sdiscarded\.full=(?P<discarded_full>\d+)\sdiscarded\.nf=(?P<discarded_nf>\d+)\smaxqsize=(?P<maxqsize>\d+).*$

How can I pass validation? I need to deploy this app on Splunk Cloud.
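Both transforms clearly do contain named groups, so one possibility worth checking is the stanza class rather than the regexes: TRANSFORMS- is for index-time transforms (which write via DEST_KEY/FORMAT), while search-time extractions built on named capture groups are normally wired up with REPORT-. A sketch, with a placeholder sourcetype stanza:

```ini
# default/props.conf
[my_sourcetype]
REPORT-extract-fields = extract-queue-statistics, extract-port-submitted
```

This is only a guess at what the check is objecting to; the App Inspect check documentation for that failure ID is the authoritative reference.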
Hi there, I ran a health check from the Splunk master server and noticed 240 orphaned knowledge objects on the search head cluster deployment server. However, when logging in to the GUI of that server, I saw 513 orphaned knowledge objects. As far as I understand, objects are detected as orphaned when the owning user account is not enabled, yet the false-positive detections are all associated with enabled user accounts. Do you have any suggestions for how I can troubleshoot this issue? Thanks, O
Greetings! I need your support on how to create Splunk SIEM rules to detect future attacks like the one described at the link below:

https://thehackernews.com/2021/04/detecting-next-solarwinds-attack.html

Your help will be most appreciated; thanks in advance! Best regards, Pacy
I have created a custom search app/view using JavaScript. I would like to include the bar called "search results tabs", similar to the one in the Search app; I have attached a screenshot of it, and below are the options I am expecting in my custom search app. I created my custom search app using the JavaScript from the Splunk website (here is the link). Could anyone please help me with how to include that option in my JavaScript?
Hi, how am I able to add the BusinessUnit column to my Splunk query below?

index=xxxxxxx sourcetype=xxxxxxx
| multikv forceheader=1
| dedup ACCOUNT_CODE DATE MVS_SYSTEM_ID CALCMIPS
| eval DATE=strftime(strptime(DATE,"%d%b%y"),"%Y-%m-%d")
| lookup Account_file.csv ACCOUNT_CODE OUTPUT Application BusinessUnit ApplicationRTO
| eval _time=strptime(DATE." "."00:00:00","%Y-%m-%d %H:%M:%S")
| table _time Application BusinessUnit MVS_SYSTEM_ID CALCMIPS
| chart avg(CALCMIPS) by Application DATE limit=0

The output should look something like the below, with BusinessUnit after Application. Thanks and regards.
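chart accepts at most two split fields, so a common trick is to merge BusinessUnit into the row field before charting. A sketch of replacement final lines, assuming BusinessUnit comes back from the lookup as expected:

```spl
| eval AppBU = Application . " / " . BusinessUnit
| chart avg(CALCMIPS) BY AppBU DATE limit=0
```

If BusinessUnit needs to stay as its own column instead, stats avg(CALCMIPS) BY Application BusinessUnit DATE keeps all three fields, but produces a flat table rather than a pivoted chart.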