All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Team, we are trying to integrate our Splunk Enterprise load balancer (LB) URL using SAML authentication. We provided details such as the Entity ID, LB URL, and Reply URL, and the IdP team generated metadata (XML), which we then uploaded to Splunk. After the configuration we received an error; please see the attached screenshot for reference. Could you please assist us with this integration? Let us know if you need any other information. Regards, Siva.
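For reference, a minimal SAML configuration in authentication.conf looks roughly like the sketch below. All values here are placeholders, not your actual settings. One thing worth checking when Splunk sits behind a load balancer: fqdn and redirectPort usually need to point at the LB URL rather than at an individual search head, otherwise the IdP's reply lands on the wrong endpoint.

```
# authentication.conf -- minimal SAML sketch, placeholder values only
[authentication]
authType = SAML
authSettings = saml

[saml]
entityId = splunk-lb-entity-id
fqdn = https://splunk-lb.example.com
redirectPort = 443
idpSSOUrl = https://idp.example.com/saml/sso
idpCertPath = $SPLUNK_HOME/etc/auth/idp.crt
```

If the exact error text from the screenshot can be shared, it is usually the fastest way to narrow down which of these settings disagrees with the IdP metadata.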
Hello everyone, can anyone help me work out how to get the difference from the previous value reported by a device that sends me a cumulative total kWh reading, so that I can calculate consumption per specified time period in a Splunk dashboard? Currently I only see an ever-increasing value. Thank you very much!
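A common pattern for turning a cumulative counter into per-interval consumption is streamstats, which carries the previous reading forward so it can be subtracted from the current one. A sketch, assuming the index, sourcetype, and field name total_kwh are placeholders for your own:

```
index=energy sourcetype=meter_data
| sort 0 _time
| streamstats current=f window=1 last(total_kwh) as prev_kwh
| eval consumption_kwh = total_kwh - prev_kwh
| where consumption_kwh >= 0
| timechart span=1h sum(consumption_kwh) as kwh_per_hour
```

The where clause guards against negative deltas when the meter counter resets; adjust the span to whatever reporting interval the dashboard needs.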
Hi, for the last couple of days I have been getting these errors from one of my search heads:

06-05-2024 14:33:35.300 +0200 WARN LineBreakingProcessor [3959599 parsing] - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 11513 - data_source="/opt/splunk/var/log/splunk/audit.log", data_host="XXX", data_sourcetype="splunk_audit"

As far as I understand, I can raise the truncate value in props.conf. I just want to understand why internal logs exceed the line length. Can someone point me in the right direction as to why the audit logs exceed this limit? Thanks.
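As to why: audit events include the full text of whatever was audited, including complete search strings, so a single very long search can push an audit.log line past 10,000 bytes. If you do decide to raise the limit, the TRUNCATE setting in props.conf is applied per sourcetype on the parsing tier (indexers or heavy forwarders, not the search head generating the log). A sketch using the sourcetype from the warning:

```
# props.conf on the parsing tier
[splunk_audit]
TRUNCATE = 20000
```

Raising TRUNCATE only changes where the line is cut; it does not address whatever is producing the long events in the first place.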
Hello, I have a problem where some of my indexes are not receiving data from file monitors. For the monitor I created an app on the server I pull the data from; it worked for a while and then stopped. The stanza in inputs.conf looks like this:

```
[monitor://\\<my_server_ip>\<folder>\*.csv]
index = <my_index>
disabled = 0
ignoreOlderThan = 2d
sourcetype = csv
source = <source_name>
```

It happens in two of my indexes that have the same stanza structure. I checked the connection from my server to the monitored path and it was fine. I checked the _internal index for errors with no results, and I ran Wireshark to look for connection errors but did not find any. Any ideas?
Hello. I want to deploy Splunk Enterprise on my machine, and I am installing a Universal Forwarder but can't figure out what the deployment server IP should be. Should I make it the IP address of the host machine running Splunk Enterprise? I need help!
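If the forwarder should be managed by a deployment server, the address it needs is the IP or hostname of the Splunk Enterprise instance acting as the deployment server, plus its management port (8089 by default) — which in a single-machine lab is indeed the host running Splunk Enterprise. The installer writes this into deploymentclient.conf on the forwarder; a sketch with placeholder values:

```
# deploymentclient.conf on the universal forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = <splunk-enterprise-host-or-ip>:8089
```

Note this is separate from the data-forwarding destination, which goes in outputs.conf and uses the receiving port (9997 by default). If you are not using a deployment server at all, the field can be left blank during installation.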
I have installed the Splunk ES app and uploaded botsv1.stream_http.json (https://github.com/splunk/attack_data), but Incident Review and the Security Posture dashboard are not showing any events. How do I make Splunk ES check my uploaded logs and generate a list of alerts? Please note that I am not checking logs forwarded by an agent; the log files were uploaded through the browser. Thank you.
Hi, our company is using Splunk Enterprise with a 600 GB/day license for our SOC, and we would now like to extend this license to cover our new private cloud managed SOC. Is this possible and permitted under the Splunk license terms?
Hi all, I am trying to add a text box and a button to a visualisation as a way of adding a 'commentary' on the chart. For example, if the chart shows something unusual, I'd like to be able to enter a reason in the text box, e.g. 'Some figures for this month are missing', then click the button and have the current date and that comment added to the lookup. I do currently have a solution of sorts, but it's very clunky: it involves setting a token in a text box and then an HTML button which opens a URL, where the URL is actually the search (search?q=%7Cmakresults%0A%7Ceval%20Date%3D...). This results in a new tab being opened and the
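For the "save the comment" part, the underlying search can append to a lookup with outputlookup append=true. A minimal sketch, assuming a hypothetical token named comment_tok and a hypothetical lookup file chart_comments.csv:

```
| makeresults
| eval Date = strftime(now(), "%Y-%m-%d"), Comment = "$comment_tok$"
| fields Date Comment
| outputlookup append=true chart_comments.csv
```

The remaining challenge, as described, is triggering this search from a button without opening a new tab; in Simple XML that is typically done by wiring the search to a token change rather than a URL.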
Hello, I am a newbie. Where can I get demo data to practice detecting different attacks in Splunk Enterprise?
Hello, I have installed DVWA on my XAMPP server and practiced some SQL injection attacks against it. After that I typed the following into the Splunk search bar, but it is not showing any results:

```
index=dvwa_logs (error OR "SQL Injection" OR "SQL Error" OR "SQL syntax") OR (sourcetype=access_combined status=200 AND (search_field="*' OR 1=1 --" OR search_field="admin' OR '1'='1"))
| stats count by source_ip, search_field, host
```
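Before refining the filters, it is worth confirming that any events are reaching that index at all over the relevant time range. A simple sanity check (index name taken from the question):

```
index=dvwa_logs earliest=-24h
| stats count by sourcetype, source
```

If this returns nothing, the problem is ingestion or the selected time range rather than the search terms. If events do exist, note that web access logs usually store attack payloads URL-encoded (e.g. %27 instead of a literal single quote), so literal strings like "' OR 1=1 --" may never match the raw events.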
Events longer than 15,000 characters are currently being truncated. We wonder if there is an upper bound for this setting (for example, that the maximum event length cannot be set to a number higher than 50,000). Where and how can we change this limit for a certain index?
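The relevant setting is TRUNCATE in props.conf, and it is applied per sourcetype (or per source/host), not per index, on the parsing tier. As far as I am aware there is no hard upper bound; setting it to 0 disables truncation entirely, though very large events cost more memory during parsing. A sketch with a placeholder sourcetype:

```
# props.conf on the indexers / heavy forwarders
[my_sourcetype]
TRUNCATE = 50000
```

To cover "a certain index", apply the stanza to the sourcetypes that feed that index.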
I want to build a dashboard in Dashboard Studio with a world map in the middle, surrounded by small panels. Is this possible, and if so, can you provide the JSON source for it?
We collect two logs with the Universal Forwarder. One log is collected fine, but the other is not. Can you give me some advice on this behaviour? The inputs.conf configuration:

```
# >>>> Not collecting
[monitor://D:\Log\State\...\*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_IPA_Log_State
ignoreOlderThan = 1h

# >>>> Collecting
[monitor://D:\Log\Communication\DeviceNet\Input\...\*Input*.Log]
disabled = false
index = cds_STW112
sourcetype = mujin_CDS_DNetLog_IN
ignoreOlderThan = 1h
```
I want to fill table cells with tags, like a multi-select control. How can I make the table contents look like tags, and how can I do it without using HTML elements?
Hello everyone, I would like to ask: is there any way for the main search to use the index returned from a subsearch? The subsearch is executed first, and the results in the data model may come from different indexes. An example of what I am trying:

```
index=[| datamodel Tutorial Client_errors index | return index]
```
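One hedged variation worth trying: return emits only the first row by default, and a subsearch already renders its result fields as field=value filters, so collecting all the distinct indexes and letting format build the OR clause may behave better. A sketch, keeping the data model names from the question:

```
index=* [| datamodel Tutorial Client_errors search
          | stats count by index
          | fields index
          | format]
| stats count by index, sourcetype
```

Here the subsearch expands to something like ( ( index=a ) OR ( index=b ) ), which the outer search then applies as its filter.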
Hi Team, I need to create three calculated fields:

```
| eval action = case(error="invalid credentials", "failure", ((like('request.path',"auth/ldap/login/%") OR like('request.path',"auth/ldapco/login/%")) AND (valid="Success")) OR (like('request.path',"auth/token/lookup-self") AND ('auth.display_name'="root")), "success")
| eval app = case(action="success" OR action="failure", "appname_Authentication")
| eval valid = if(error="invalid credentials", "Error", "Success")
```

The action field depends on valid, and the app field depends on action. I am unable to see the app field in Splunk; may I know how to create it?
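One likely explanation, worth verifying against your setup: calculated fields defined in props.conf are all evaluated independently of one another, so one calculated field cannot reference another — action cannot see valid, and app cannot see action. A hedged workaround is to inline the dependency so each EVAL stands on its own, roughly like this:

```
| eval valid  = if(error="invalid credentials", "Error", "Success")
| eval action = case(error="invalid credentials", "failure",
                     (like('request.path', "auth/ldap/login/%") OR like('request.path', "auth/ldapco/login/%"))
                       OR (like('request.path', "auth/token/lookup-self") AND 'auth.display_name'="root"), "success")
| eval app    = if(isnotnull(action), "appname_Authentication", null())
```

The valid="Success" test is dropped from action's second branch here on the assumption that once the first case branch has not matched, error is not "invalid credentials" anyway; check that this matches your data before relying on it.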
Hey, I discovered you can emulate the mvexpand command and avoid its limitation configured in limits.conf. You just stats by the multivalue field you were trying to mvexpand, like so:

```
| stats values(*) AS * by <multivalue_field>
```

That's it, (edit:) assuming each value is unique, such as a unique identifier. You can make values unique using methods like foreach to prepend a row-based number to each value, then use split and mvindex to remove the row numbers afterwards. (/Edit.) Stats splits <multivalue_field> into its individual rows, and values(*) copies the data across all rows. As an added measure, you can avoid carrying unnecessary _raw data, to reduce memory use, with an explicit fields command. In my experience, the | fields _time, * trick does not actually remove every Splunk internal field; removing _raw had to be explicit:

```
| fields _time, xxx, yyy, zzz, <multivalue_field>
| fields - _raw
| stats values(*) AS * by <multivalue_field>
```

The above strategy minimizes your search's memory footprint as much as possible before expanding the multivalue field.
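The "make values unique" step mentioned in the edit can be sketched with streamstats and mvmap (mvmap requires a reasonably recent Splunk version), prefixing each value with its row number and stripping it again after the stats. Field names here are placeholders:

```
| streamstats count as row
| eval mvf = mvmap(mvf, row . "::" . mvf)
| fields - row
| stats values(*) AS * by mvf
| eval mvf = mvindex(split(mvf, "::"), 1)
```

The "::" separator is an arbitrary choice; pick one that cannot occur inside the values themselves.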
This evening I decided to set up a test Splunk box in my lab to goof around with. It's been a while since I have done this part of the process (the work cluster has been up and running for years now). Looking at my local test box, I noticed the hard drive is probably not the best size for this. Since I also run a syslog server on this host and pull those files into Splunk (Splunk will not always be running, hence not sending data directly to Splunk), I wanted to try doing a line-level destructive read. I did see where others were using a monitor and deleting the file on ingestion, but did not see whether line-level deletion is being done. So, the question is: has anyone done that, and if so, do you have some hints or pointers? Thanks.
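For what it's worth, the built-in destructive read in Splunk is file-level, not line-level: a batch input with move_policy = sinkhole indexes each file and then deletes it. I am not aware of a native line-level equivalent. A sketch, assuming a hypothetical archive directory that the syslog server rotates completed files into:

```
# inputs.conf -- batch input deletes each file after indexing it
[batch:///var/syslog-archive/*.log]
move_policy = sinkhole
index = syslog
sourcetype = syslog
disabled = false
```

Because the files are removed after ingestion, point this only at rotated, closed files, never at the file the syslog daemon is actively writing.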
I am trying to understand what the time field is after tstats. We have the _time field for every event; that's how tstats finds the latest event. But what does latest mean for a stats that comes after tstats? For example:

```
| tstats latest(var1) as var1 by var2 var3
| eval var4 = ...
| stats latest(var4) by var3
```
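One way to make the second latest() well-defined is to carry _time through the tstats explicitly; without it, the rows reaching the second stats have no _time, and latest() then appears to fall back on row order rather than event time. A sketch (the eval is elided as in the question):

```
| tstats latest(var1) as var1 latest(_time) as _time by var2 var3
| eval var4 = ...
| stats latest(var4) as var4 by var3
```

With _time present per row, latest(var4) is anchored to the actual latest event behind each var2/var3 group.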
Can someone help me get a better understanding of the Splunk search head and how it works?