All Posts


Hi @siv  If you have a CSV on a forwarder that you want to become a lookup in Splunk, then the best way to achieve this is probably to monitor the file (using monitor:// in inputs.conf) and send it to a specific index on your Splunk indexers. Then create a scheduled search that searches that index, retrieves the forwarded data, and writes it out to a lookup (using the | outputlookup command). Exactly what the resulting search looks like will depend on how and when the CSV is updated, but ultimately this should be a viable solution. There may be other solutions, but they would require significantly more engineering effort.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
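As a rough sketch of that pipeline (the path, sourcetype, index, lookup name, and key_field below are invented placeholders, not from the original thread):

inputs.conf on the forwarder:

    # Monitor the CSV and send it to a staging index
    [monitor:///opt/data/my_lookup_source.csv]
    sourcetype = csv_for_lookup
    index = lookup_staging

Scheduled search on the search head (for example, hourly):

    index=lookup_staging sourcetype=csv_for_lookup
    | stats latest(*) as * by key_field
    | outputlookup my_lookup.csv

The stats latest(*) step keeps only the most recent value of each field per key, so repeated loads of the file do not produce duplicate lookup rows.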
Hi @danielbb  I think you need to look at how this is deployed to each of the 15 HFs; ultimately you would have to make *something* different on one of them in order for it to know which one should run the input. How are you deploying the app to the 15 HFs? Deployment Server? Ansible? Each HF operates independently and not as part of a cluster; they aren't aware of each other, and there is no leader or anything like that which could be used to determine a particular role. If you are deploying via Ansible then you could use a templated inputs.conf to toggle the disabled flag on the input, but it really depends on your architecture and deployment approach. Please let us know so we can help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
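To make the Ansible suggestion above concrete, here is a minimal sketch of a templated inputs.conf (the monitored path, index, and the designated_hf variable are invented for illustration; inventory_hostname is Ansible's built-in per-host variable):

    # templates/inputs.conf.j2
    [monitor:///var/log/myapp.log]
    index = myindex
    # Enabled only on the one HF named in the (hypothetical) designated_hf variable
    disabled = {{ '0' if inventory_hostname == designated_hf else '1' }}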
Sorry for the late response. We opted to stick with using Apache as an SSL proxy to pass the user's credentials to Splunk.
Hi @avikc100  You can create a search that calculates the relevant dates and sets them as tokens, then use the tokens:

    <search id="days">
      <query>| makeresults
        | eval dayMinus0=strftime(now(), "%d/%m/%Y")
        | eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
        | eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
        | eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
        | eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
        | eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
      <done>
        <set token="dayMinus0">$result.dayMinus0$</set>
        <set token="dayMinus1">$result.dayMinus1$</set>
        <set token="dayMinus2">$result.dayMinus2$</set>
        <set token="dayMinus3">$result.dayMinus3$</set>
        <set token="dayMinus4">$result.dayMinus4$</set>
        <set token="dayMinus5">$result.dayMinus5$</set>
      </done>
    </search>

Then use $dayMinusN$ for each panel title, where N is the number of days back.

Below is the full XML of the example dashboard for you to play with if it helps:

    <dashboard version="1.1" theme="light">
      <label>SplunkAnswers1</label>
      <search id="days">
        <query>| makeresults
          | eval dayMinus0=strftime(now(), "%d/%m/%Y")
          | eval dayMinus1=strftime(now()-86400, "%d/%m/%Y")
          | eval dayMinus2=strftime(now()-(86400*2), "%d/%m/%Y")
          | eval dayMinus3=strftime(now()-(86400*3), "%d/%m/%Y")
          | eval dayMinus4=strftime(now()-(86400*4), "%d/%m/%Y")
          | eval dayMinus5=strftime(now()-(86400*5), "%d/%m/%Y")</query>
        <done>
          <set token="dayMinus0">$result.dayMinus0$</set>
          <set token="dayMinus1">$result.dayMinus1$</set>
          <set token="dayMinus2">$result.dayMinus2$</set>
          <set token="dayMinus3">$result.dayMinus3$</set>
          <set token="dayMinus4">$result.dayMinus4$</set>
          <set token="dayMinus5">$result.dayMinus5$</set>
        </done>
      </search>
      <search id="baseTest">
        <query>| tstats count where index=_internal by _time, host span=1d
          | eval daysAgo=floor((now()-_time)/86400)</query>
        <earliest>-7d@d</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <row>
        <panel>
          <table>
            <title>$dayMinus0$</title>
            <search base="baseTest">
              <query>| where daysAgo=0 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
        <panel>
          <table>
            <title>$dayMinus1$</title>
            <search base="baseTest">
              <query>| where daysAgo=1 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
        <panel>
          <table>
            <title>$dayMinus2$</title>
            <search base="baseTest">
              <query>| where daysAgo=2 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
        <panel>
          <table>
            <title>$dayMinus3$</title>
            <search base="baseTest">
              <query>| where daysAgo=3 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
        <panel>
          <table>
            <title>$dayMinus4$</title>
            <search base="baseTest">
              <query>| where daysAgo=4 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
        <panel>
          <table>
            <title>$dayMinus5$</title>
            <search base="baseTest">
              <query>| where daysAgo=5 | table host count</query>
            </search>
            <option name="count">20</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
      </row>
    </dashboard>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
I want to replace the hard-coded text "Today" with the current system date in a Splunk report. Please help if this is possible. Please see the attachment.
♪♫♬ And they say that a hero could saaaaaaave us I'm not gonna stand here and waaaaaaaait ♪♫♬
@hemant_lnu wrote: We have one index os_linux which has 2 sourcetypes and I see props and transforms are written. Can you help me understand how it's working?

Here are the same settings with annotations:

props.conf:

    [Linux_os_syslog]
    # Tells Splunk to look for the event timestamp at the beginning of the event
    TIME_PREFIX = ^
    # Tells Splunk what a timestamp looks like
    TIME_FORMAT = %b %d %H:%M:%S
    # How far from TIME_PREFIX the timestamp is allowed to be
    MAX_TIMESTAMP_LOOKAHEAD = 15
    # Don't combine lines
    SHOULD_LINEMERGE = false
    # Events break after a newline (CR and/or LF)
    LINE_BREAKER = ([\r\n]+)
    # Cut off each event after 2048 characters
    TRUNCATE = 2048
    # Event timestamps are expected to be in this time zone
    TZ = US/Eastern

transforms.conf:

    # Look for "type=", some text followed by whitespace, then "msg=audit".
    # If it's found, set the sourcetype field to "linux:audit".
    [linux_audit]
    DEST_KEY = MetaData:Sourcetype
    REGEX = type=\S+\s+msg=audit
    FORMAT = sourcetype::linux:audit

    # Look for "node=" in each event and set the 'host' field to the word
    # that follows it.
    [auditd_node]
    REGEX = \snode=(\S+)
    FORMAT = host::$1
    DEST_KEY = MetaData:Host
Nope. If you're pushing an app with an enabled input to 15 forwarders, you're getting an enabled input on each of them. The typical way to handle this is to define the input as disabled within the main app and push it to all forwarders, then create a small app which overrides the input's state to enabled and push that app to just one forwarder.
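In conf terms, a minimal sketch of the two apps (the monitored path and index name are invented placeholders):

    # main app, pushed to all 15 forwarders: default/inputs.conf
    [monitor:///var/log/myapp.log]
    index = main
    disabled = 1

    # small override app, pushed to just one forwarder: local/inputs.conf
    [monitor:///var/log/myapp.log]
    disabled = 0

Placing the override in the small app's local directory makes it win over the main app's default settings under Splunk's configuration precedence rules.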
For putting values into a nice xy-table you can use either the chart command or xyseries, but... you only have X and Y. You don't have values to put into the cells of the table.
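For illustration only (the index, sourcetype, and field names are invented): if a count is an acceptable cell value, either the chart or the stats + xyseries form produces the xy-table:

    index=myindex sourcetype=mysourcetype
    | bin _time span=1h
    | chart count over _time by user

    index=myindex sourcetype=mysourcetype
    | bin _time span=1h
    | stats count by _time user
    | xyseries _time user count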
I can change the return and the time. I just need the syntax to create a table where y=time and x=SAML user.
Ouch. This subsearch with "return 10000" hurts me deeply. If this is the order of magnitude of the size of your data, be aware that no browser will render such a table correctly. Also, how would you align data in such a table when each user has a different login time?
We have one index os_linux which has 2 sourcetypes, and I see props and transforms are written for them. Can you help me understand how it's working?

linux:audit
Linux_os_syslog

props.conf:

    [Linux_os_syslog]
    TIME_PREFIX = ^
    TIME_FORMAT = %b %d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 15
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TRUNCATE = 2048
    TZ = US/Eastern

transforms.conf:

    [linux_audit]
    DEST_KEY = MetaData:Sourcetype
    REGEX = type=\S+\s+msg=audit
    FORMAT = sourcetype::linux:audit

    [auditd_node]
    REGEX = \snode=(\S+)
    FORMAT = host::$1
    DEST_KEY = MetaData:Host
No. You cannot read a lookup's contents directly using a forwarder. If you want that functionality (I needed it once so that users could "edit" one particular lookup but not any others), you need to read the CSV file's contents as events into a temporary index and create a scheduled search which reads those events and does | outputlookup at the end. It's a bit complicated because you have to keep track of when you last updated the lookup so you don't overwrite it each time.
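A sketch of the scheduled-search half (the index, sourcetype, field, and lookup names are placeholders); the override_if_empty=false option stops outputlookup from wiping the lookup on a run where no new events arrived:

    index=lookup_staging sourcetype=csv_for_lookup earliest=-1h@h latest=@h
    | table field1 field2 field3
    | outputlookup override_if_empty=false my_lookup.csv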
Thanks for this answer; using it I was able to resolve my issue of finding out the version of all the Enterprise instances and UFs my team is responsible for upgrading:

    index="_internal" source="*metrics.log*" group=tcpin_connections hostname IN (<your.servernames>)
    | stats latest(fwdType) latest(version) latest(os) by hostname
@siv  There are two methods of ingesting:

- Upload with Splunk Web: a one-time process done manually by the user. (Note that uploading via Splunk Web has a 500 MB limit on file size.)
- Monitor from a filesystem with a UF or other forwarder: this method is for ongoing ingestion over a period of time and may not require any manual intervention by the user once it is set up. You will need to create an app with an inputs.conf that specifies the file or path to monitor:

    [monitor:///opt/test/data/internal_export_local.csv]
    sourcetype = mycsvsourcetype
    index = test

Create an accompanying props.conf file:

    [mycsvsourcetype]
    FIELD_DELIMITER = ,
    FIELD_NAMES = host,source,sourcetype,component

Either create the app directly on the system ingesting the file, or create it on the Deployment Server and deploy it to the system ingesting the file, whether that's Splunk Enterprise or a system with the Splunk Universal Forwarder installed. Once splunkd is restarted on that system, Splunk will begin to ingest the new file.

Refer to these threads:
https://community.splunk.com/t5/Getting-Data-In/Inputs-conf-a-CSV-File-From-Universal-Forwarder/m-p/520310
https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-and-CSV-from-Remote-System/m-p/176700
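If you go the Deployment Server route, a minimal serverclass.conf sketch might look like this (the server class name, app name, and host pattern are invented for illustration):

    [serverClass:csv_ingest]
    whitelist.0 = my-uf-host*

    [serverClass:csv_ingest:app:csv_ingest_app]
    stateOnClient = enabled
    restartSplunkd = true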
The events I got back showed results containing: SAML by user, host, source, sourcetype.

The results come like this:

    Apr 20 10:40:53 server AuditLog[123456]: 654321 2025-04-21 10:40:53 UTC 12345678911000@domain sessions|login User 12345678911000@domain successfully logged in
Below is one sanitized raw test event:

    2025-03-18 13:03:07.000, ID="484294162", Documentable="No", System="A1234", Group="GSS-27", Environment="3 TEST", Datasource="abcd.test.com", DBMSProduct="MS SQL SERVER", FindingType="Pass", SeverityCode="2 HIGH", SeverityScore="8.0", TestID="0000", TestName="SQL Server must generate audit records when unsuccessful attempts to modify categorized information occur.", TestDescription="Changes in categories of information must be tracked. Without an audit trail, unauthorized access to protected data could go undetected. To aid in diagnosis, it is necessary to keep track of failed attempts in addition to the successful ones. For detailed information on categorizing information, refer to FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems, and FIPS Publication 200, Minimum Security Requirements for Federal Information and Information Systems. If auditing the modification of data classifications is not required, this is not applicable.", FindingDescription="Auditing unsuccessful attempts to modify categorized information is set up correctly.", TestResultID="123456789101112131415", RemediationRule="1", RemediationAssignment="N/A", RemediationAnalysis="This test passed. No remediation necessary.", RemediationGuidance="No action required.", ExternalReference="STIG_Reference - SQL6-D0-014000 : STIG_SRG - SRG-APP-000498-DB-000347", VersionLevel="16.0", PatchLevel="0000", Reference="Sample14", VulnerabilityType="CONF", ScanTimestamp="2025-03-18 09:03:07.0000000", FirstExecution="2022-12-06 10:08:32.0000000", LastExecution="2025-03-18 09:03:35.0000000", CurrentScore="Pass", CurrentScoreSince="2022-12-06 10:08:32.0000000", CurrentScoreDays="833", AcknowledgedServiceAccount="No", SecurityAssessmentName="A1234_TEST (MS SQL SERVER)", CollectorID="testcollector", ScanYear="2025", ScanMonth="3", ScanDay="18", ScanCycle="2", Description="A1234;TEST;GSS-27", Host="12345.sample.test.com", Port="1234"
Please share some sample anonymised events so we can better advise you.
Probably the best way to include "missing" times is to use timechart. However, it is difficult to advise how you might use this without seeing your events. Please share your events (anonymised, of course).
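As a generic illustration only (the index, span, and split field are invented): timechart emits a row for every interval in the search time range, and fillnull turns the empty intervals into zeros:

    index=myindex
    | timechart span=1mon count by Environment
    | fillnull value=0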
@ITWhisperer  I made the adjustments you suggested above and am getting the results below, using one system and group as a sample. I am running the search over 6 months and need to see 0 for the months with no data/events; the adjustments above give only the months with data.

    System  Group   ScanMonth  ScanYear  Environment  TOTAL  1_LOW  2_MEDIUM  3_HIGH  4_CRITICAL
    A1234   GSS-27  2          2025      3_TEST       216    2      28        155     31
    A1234   GSS-27  3          2025      3_TEST       430    4      56        308     62
    A1234   GSS-27  2          2025      4_DEV        222    2      28        161     31
    A1234   GSS-27  3          2025      4_DEV        444    4      56        322     62

Needed:

    System  Group   ScanMonth  ScanYear  Environment  TOTAL  1_LOW  2_MEDIUM  3_HIGH  4_CRITICAL
    A6020B  GSS-27  1          2025      3_TEST       0      0      0         0       0
    A6020B  GSS-27  2          2025      3_TEST       216    2      28        155     31
    A6020B  GSS-27  3          2025      3_TEST       430    4      56        308     62
    A6020B  GSS-27  1          2025      4_DEV        0      0      0         0       0
    A6020B  GSS-27  2          2025      4_DEV        222    2      28        161     31
    A6020B  GSS-27  3          2025      4_DEV        444    4      56        322     62
    A6020B  GSS-27  10         2024      3_TEST       0      0      0         0       0
    A6020B  GSS-27  11         2025      3_TEST       0      0      0         0       0
    A6020B  GSS-27  12         2026      3_TEST       0      0      0         0       0
    A6020B  GSS-27  10         2027      4_DEV        0      0      0         0       0
    A6020B  GSS-27  11         2028      4_DEV        0      0      0         0       0
    A6020B  GSS-27  12         2029      4_DEV        0      0      0         0       0