All Posts

The audit log exceeds the limit because Splunk wrote a very long event to the log. Why that happened is impossible to say without knowing more about the event itself. Or is the question more like "shouldn't Splunk avoid writing events longer than 10,000 characters in the first place?" If so, I don't disagree, but I'd rather Splunk give me the option (by increasing TRUNCATE) to log all of the event than cut off what might otherwise be important data.
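If you do decide to raise the limit, here is a minimal props.conf sketch — the sourcetype name comes from the warning itself, while the 20000 value is only an assumption; pick something larger than your longest audit events:

    # props.conf on the instance that parses the data
    # (here apparently the search head emitting the warning)
    # assumption: 20000 is just an example value
    [splunk_audit]
    TRUNCATE = 20000

Parsing-time settings like this normally take effect only after a restart of that instance, and only for newly indexed events.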
Hi @JoshuaJJ , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @MartyJ , it isn't that straightforward: I developed one solution with JS and one without, and this is the solution without JS. Obviously you can only modify a field in a lookup and not in an index, so use a KV Store:

<form version="1.1">
  <label>Manage All Cases</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="radio" token="resetTokens" searchWhenChanged="true">
      <label/>
      <choice value="reset">Reset Inputs</choice>
      <choice value="retain">Retain</choice>
      <default>reset</default>
      <change>
        <condition value="reset">
          <unset token="_key"/>
          <unset token="timestamp"/>
          <unset token="User_Name"/>
          <unset token="Status"/>
          <set token="resetTokens">retain</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="dropdown" token="User_Name">
        <label>User Name</label>
        <choice value="*&quot; OR NOT User_Name=&quot;*">All</choice>
        <prefix>User_Name="</prefix>
        <suffix>"</suffix>
        <fieldForLabel>User_Name</fieldForLabel>
        <fieldForValue>User_Name</fieldForValue>
        <search>
          <query>| inputlookup open_cases | dedup User_Name | sort User_Name | table User_Name</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <default>*" OR NOT User_Name="*</default>
      </input>
      <input type="dropdown" token="Status">
        <label>Status</label>
        <choice value="*">All</choice>
        <prefix>Status="</prefix>
        <suffix>"</suffix>
        <fieldForLabel>Status</fieldForLabel>
        <fieldForValue>Status</fieldForValue>
        <search>
          <query>| inputlookup open_cases WHERE Status!="Escalation" | dedup Status | sort Status | table Status</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <default>*</default>
      </input>
      <table id="master">
        <title>Total All Cases = $server_count$</title>
        <search>
          <query>| inputlookup my_lookup WHERE $User_Name$ $Status$ | eval Time=strftime(TimeStamp,"%d/%m/%Y %H:%M:%S"), key=_key | table key Time Status User_Name TimeStamp</query>
          <sampleRatio>1</sampleRatio>
          <progress>
            <set token="server_count">$job.resultCount$</set>
          </progress>
          <cancelled>
            <unset token="server_count"/>
          </cancelled>
        </search>
        <option name="count">10</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">row</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <fields>["_key","Time","Status","Notes","User_Name"]</fields>
        <drilldown>
          <set token="key">$row.key$</set>
          <set token="timestamp">$row.TimeStamp$</set>
          <set token="alertname">$row.Alert_Name$</set>
          <set token="description">$row.Description$</set>
          <set token="status">$row.Status$</set>
          <set token="notes">$row.Notes$</set>
          <set token="username">$row.User_Name$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Modify Row</title>
      <input type="dropdown" token="status_to_update">
        <label>Status</label>
        <default>$status$</default>
        <search>
          <query/>
        </search>
        <choice value="Closed">Closed</choice>
        <choice value="Work-in-progress">Work-in-progress</choice>
        <choice value="Escalation">Escalation</choice>
        <choice value="Stand-By">Stand-By</choice>
      </input>
      <input type="text" token="notes_to_update">
        <label>Add Notes</label>
        <default>$notes$</default>
      </input>
      <table id="detail" depends="$key$">
        <title>Row to modify</title>
        <search>
          <query>| makeresults count=1 | eval key="$key$", TimeStamp="$timestamp$", Status="$status_to_update$", Notes="$notes_to_update$", Time=strftime($timestamp$,"%d/%m/%Y %H:%M:%S") | rename username AS User_Name | fields User_Name | table key Time TimeStamp Status Notes User_Name</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <fields>_key,Time,Status,Notes,User_Name</fields>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">row</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <set token="status_updated">$row.Status$</set>
          <set token="notes_updated">$row.Notes$</set>
          <set token="username_updated">$row.User_Name$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table id="detail2" depends="$status_to_update$">
        <title>Modified Lookup row</title>
        <search>
          <query>| inputlookup my_lookup | eval Status=if(_key="$key$","$status_updated$",Status), Notes=if(_key="$key$","$notes_updated$",Notes), User_Name=if(_key="$key$","$username_updated$",User_Name) | search _key="$key$" | outputlookup open_cases append=true | eval key=_key | collect addtime=true index=summary_alerts | eval Time=strftime(TimeStamp,"%d/%m/%Y %H:%M:%S"), key=_key | table key Time TimeStamp Alert_Name Description Status Notes User_Name</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <fields>_key,Time,Status,Notes,User_Name</fields>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Don't copy my dashboard as-is: look at the approach and adapt it to your real case. Ciao. Giuseppe
Thank you so much for the help. I always forget to add fields to the Data Model  
How do I map MITRE ATT&CK content in Splunk Security Essentials? I want to map MITRE ATT&CK techniques for all of the alerts I created in Splunk Enterprise.
https://docs.splunk.com/Documentation/Splunk/9.2.1/DistSearch/PropagateSHCconfigurationchanges Regarding the etc/passwd changes, my guess would be "don't do it". I think the encryption of the passwords would have to be redone. Use the UI for password changes so the change replicates across the cluster.
Hi @JoshuaJJ , two points:
Is tag a field of your Data Model? You can check this in your Data Model definition.
If not, you cannot use it, or you have to modify your Data Model fields. If yes, you have to use the <your_datamodel> prefix before tag in the WHERE condition (a concrete sketch follows below):
| tstats count FROM datamodel=<data_model>.<root_event> WHERE <data_model>.tag=CA BY _time host
Ciao. Giuseppe
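For example, a sketch assuming the CIM Authentication data model, where the root dataset has the same name as the model, so the prefix is simply Authentication — swap in your own data model and root event names:

    | tstats count FROM datamodel=Authentication.Authentication WHERE Authentication.tag=CA BY _time host

Whether tag=CA actually matches depends on your tags.conf and, for accelerated searches, on the data model summary having caught up since the tag was added.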
Hi @Gil, did ingestion perhaps run fine until the 31st of May and then stop on the 1st of June? If that's the case, check the TIME_FORMAT of your logs: you are probably using a European date format (dd/mm/yyyy) and you didn't define a TIME_FORMAT for your timestamps, so Splunk (that's America!) by default uses the American format (mm/dd/yyyy). This means that you indexed today's logs (5th of June) as logs of the 6th of May. You should force the TIME_FORMAT for that sourcetype in props.conf (see the sketch below). Ciao. Giuseppe
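A minimal props.conf sketch of that fix — the sourcetype name and the exact format string are assumptions, so match them to how the timestamp really appears in your events:

    # props.conf on the parsing tier (indexer or heavy forwarder)
    # assumption: events start with a timestamp like 05/06/2024 14:33:35
    [<your_sourcetype>]
    TIME_PREFIX = ^
    TIME_FORMAT = %d/%m/%Y %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19

Note that this only affects newly indexed events; anything already indexed keeps the timestamp it was given.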
Hi @tuts , I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
I want to link OpenCTI with Splunk ES to stay on top of threats.
Good morning, I recently created a tag for a set of hosts, for example CA for all California hosts. Does this take time to populate or show up within my Data Models? I am running a search similar to this:
| tstats count FROM datamodel=<data_model>.<root_event> WHERE tag=CA BY _time, host, etc.
I have also tried this:
| datamodel <data_model> <root_event> search | search tag=CA | table _time, host, etc.
Perhaps if you read my suggestion more carefully you would have noticed that I suggested you evaluate a new token and then use that token in the link!
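To make that concrete, here is a minimal sketch of the suggestion, assuming a table column named Link and a row field URL that holds the host/path without a scheme (the names are taken from the poster's snippet; the exact eval is illustrative):

    <drilldown>
      <condition field="Link">
        <!-- first evaluate a new token from the row value -->
        <eval token="link">if($row.URL|s$ = "", "", "https://" . $row.URL|s$)</eval>
        <!-- then reference that token, not $row.URL$, in the link -->
        <link target="_blank">$link|n$</link>
      </condition>
    </drilldown>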
Please, I need the method if you have managed to do this.
Ah, very good. Thank you!        
Hello Team, We tried to integrate our Splunk Enterprise LB URL using SAML authentication. We provided details such as the Entity ID, LB URL, and Reply URL, and they generated metadata (XML), which we then uploaded to Splunk. After configuration, we received the following error. Please find the screenshot of the error below for your reference. Could you please assist us with the mentioned integration part? Please let us know if you need any other information. Regards, Siva.
You have several options:
1) The delta command to calculate the difference
2) The autoregress command to copy over the value from the previous result row and calculate the difference manually
3) The streamstats command to do the same as 2), but in a more complicated way
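A minimal sketch of option 1), assuming an index called energy, a sourcetype called meter, and a field total_kwh holding the cumulative reading — adjust all of these names to your data:

    index=energy sourcetype=meter
    | timechart span=1h latest(total_kwh) AS total_kwh
    | delta total_kwh AS kwh_consumed
    | fields _time kwh_consumed

This takes the latest meter reading per hour and subtracts the previous row's value, giving consumption per hour. Option 3) would replace the delta line with | streamstats current=f last(total_kwh) AS prev_kwh followed by | eval kwh_consumed=total_kwh-prev_kwh.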
Hello everyone, can anyone help me with how to get the difference from the previous value for a device that sends me the total kWh reading, so that I can calculate consumption per specified time period in a Splunk dashboard? Currently I am only shown an ever-increasing value. Thank you very much!
Hi @ITWhisperer , this is my code in the XML dashboard. My dashboard should contain a link, but when I click on the link it shows null. So I used the code below, and I am still getting a null value.
<condition field="Link">
  <eval token="link">if(isnull($row.URL$),"","https://$row.URL|n$"</eval>
  <link target="_blank">$row.URL|n$</link>
</condition>
Hi, for a couple of days I have been getting these errors from one of my search heads: "06-05-2024 14:33:35.300 +0200 WARN LineBreakingProcessor [3959599 parsing] - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 11513 - data_source="/opt/splunk/var/log/splunk/audit.log", data_host="XXX", data_sourcetype="splunk_audit"" As far as I understand, I can set the TRUNCATE value in props.conf to a higher value. I just want to understand why the internal logs exceed this line length. Can someone point me in the right direction as to why the audit log exceeds this limit? Thanks
Hello, I have a problem: I'm not receiving data in some of my indexes when it is related to monitoring. For the monitor I created an app on the server I pull the data from; it worked for a while and now it has stopped. The stanza in inputs.conf looks like this:
[monitor://\\<my_server_ip>\<folder>\*.csv]
index = <my_index>
disabled = 0
ignoreOlderThan = 2d
sourcetype = csv
source = <source_name>
It happens in 2 of my indexes that have the same stanza structure. I checked the connection from my server to the monitored path and it was OK. I checked the _internal index for errors with no results. I also opened Wireshark to look for connection errors and didn't find any. Any ideas?