All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Ouch. This subsearch with "return 10000" hurts me deeply. If this is the order of magnitude of the size of your data, be aware that no browser will render such a table correctly. Also, how would you align the data in such a table, where each user has a different login time?
We have one index, os_linux, which has 2 sourcetypes, and I see that a props and a transforms are written for it. Can you help me understand how it works?

linux:audit
Linux_os_syslog

props.conf

[Linux_os_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 2048
TZ = US/Eastern

transforms.conf

[linux_audit]
DEST_KEY = MetaData:Sourcetype
REGEX = type=\S+\s+msg=audit
FORMAT = sourcetype::linux:audit

[auditd_node]
REGEX = \snode=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
No. You cannot read a lookup's contents directly using a forwarder. If you want that functionality (I needed it once so that users could "edit" one particular lookup but not any other ones), you need to read the csv file contents as events into a temporary index and create a scheduled search which reads those events and does | outputlookup at the end. It is a bit complicated because you have to keep track of when you last updated the lookup so you don't overwrite it each time.
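As a rough sketch of that scheduled search (the index, sourcetype, lookup name and field list below are all made up for illustration), it could look something like:

index=lookup_staging sourcetype=user_lookup_csv
| eval load_time=_indextime
| eventstats max(load_time) as latest_load
| where load_time >= latest_load - 60
| table user, role, department
| outputlookup my_editable_lookup.csv

The idea is to keep only the rows from the most recently ingested copy of the file before writing them back out with outputlookup; you would still need the bookkeeping mentioned above so the lookup is not rewritten on every run.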
Thanks for this answer; using it, I was able to resolve my issue of finding out the version of all the Enterprise instances and UFs my team is responsible for upgrading:

index="_internal" source="*metrics.log*" group=tcpin_connections hostname IN (<your.servernames>)
| stats latest(fwdType) latest(version) latest(os) by hostname
@siv There are two methods of ingesting:

Upload with Splunk Web: This is a one-time process done manually by the user. (Note that uploading via Splunk Web has a 500 MB limit on file size.)

Monitor from a filesystem with a UF or other forwarder: This method is for ongoing ingestion over a period of time and may not require any manual intervention by the user once set up. You will need to create an app with an inputs.conf that specifies the file or path to monitor:

[monitor:///opt/test/data/internal_export_local.csv]
sourcetype=mycsvsourcetype
index=test

Create an accompanying props.conf file:

[mycsvsourcetype]
FIELD_DELIMITER=,
FIELD_NAMES=host,source,sourcetype,component

Either create the app directly on the system ingesting the file, or create it on the Deployment Server and deploy it to the system ingesting the file, whether that's Splunk Enterprise or a system with the Splunk Universal Forwarder installed. Once splunkd is restarted on that system, Splunk will begin to ingest the new file.

Refer to these threads:
https://community.splunk.com/t5/Getting-Data-In/Inputs-conf-a-CSV-File-From-Universal-Forwarder/m-p/520310
https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-and-CSV-from-Remote-System/m-p/176700
The events I got back showed results containing: SAML by user, host, source, sourcetype. The results come back like this:

Apr 20 10:40:53 server AuditLog[123456]: 654321 2025-04-21 10:40:53 UTC 12345678911000@domain sessions|login User 12345678911000@domain successfully logged in
Below is one sanitized raw test event: 2025-03-18 13:03:07.000, ID="484294162", Documentable="No", System="A1234", Group="GSS-27", Environment="3 TEST", Datasource="abcd.test.com", DBMSProduct="MS SQL SERVER", FindingType="Pass", SeverityCode="2 HIGH", SeverityScore="8.0", TestID="0000", TestName="SQL Server must generate audit records when unsuccessful attempts to modify categorized information occur.", TestDescription="Changes in categories of information must be tracked. Without an audit trail, unauthorized access to protected data could go undetected. To aid in diagnosis, it is necessary to keep track of failed attempts in addition to the successful ones. For detailed information on categorizing information, refer to FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems, and FIPS Publication 200, Minimum Security Requirements for Federal Information and Information Systems. If auditing the modification of data classifications is not required, this is not applicable.", FindingDescription="Auditing unsuccessful attempts to modify categorized information is set up correctly.", TestResultID="123456789101112131415", RemediationRule="1", RemediationAssignment="N/A", RemediationAnalysis="This test passed. No remediation necessary.", RemediationGuidance="No action required.", ExternalReference="STIG_Reference - SQL6-D0-014000 : STIG_SRG - SRG-APP-000498-DB-000347", VersionLevel="16.0", PatchLevel="0000", Reference="Sample14", VulnerabilityType="CONF", ScanTimestamp="2025-03-18 09:03:07.0000000", FirstExecution="2022-12-06 10:08:32.0000000", LastExecution="2025-03-18 09:03:35.0000000", CurrentScore="Pass", CurrentScoreSince="2022-12-06 10:08:32.0000000", CurrentScoreDays="833", AcknowledgedServiceAccount="No", SecurityAssessmentName="A1234_TEST (MS SQL SERVER)", CollectorID="testcollector", ScanYear="2025", ScanMonth="3", ScanDay="18", ScanCycle="2", Description="A1234;TEST;GSS-27", Host="12345.sample.test.com", Port="1234"
Please share some sample anonymised events so we can better advise you.
Probably the best way to include "missing" times is to use timechart. However, it is difficult to advise how you might use this without seeing your events. Please share your events (anonymised, of course).
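Purely as an illustration (the index, sourcetype and span below are placeholders), timechart pads time buckets that contain no events with zeros instead of dropping them:

index=sample_index sourcetype=sample_sourcetype
| timechart span=1mon count by SeverityCode

whereas a plain stats ... by ScanMonth ScanYear only produces rows for combinations that actually occur in the data.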
@ITWhisperer I made the adjustments you suggested above and am getting the results below, using one System and Group as a sample. I am running the search over 6 months and need to see 0 for the months with no data/events; the adjustments above give only the months that have data.

System Group ScanMonth ScanYear Environment TOTAL 1_LOW 2_MEDIUM 3_HIGH 4_CRITICAL
A1234 GSS-27 2 2025 3_TEST 216 2 28 155 31
A1234 GSS-27 3 2025 3_TEST 430 4 56 308 62
A1234 GSS-27 2 2025 4_DEV 222 2 28 161 31
A1234 GSS-27 3 2025 4_DEV 444 4 56 322 62

Needed:

System Group ScanMonth ScanYear Environment TOTAL 1_LOW 2_MEDIUM 3_HIGH 4_CRITICAL
A6020B GSS-27 1 2025 3_TEST 0 0 0 0 0
A6020B GSS-27 2 2025 3_TEST 216 2 28 155 31
A6020B GSS-27 3 2025 3_TEST 430 4 56 308 62
A6020B GSS-27 1 2025 4_DEV 0 0 0 0 0
A6020B GSS-27 2 2025 4_DEV 222 2 28 161 31
A6020B GSS-27 3 2025 4_DEV 444 4 56 322 62
A6020B GSS-27 10 2024 3_TEST 0 0 0 0 0
A6020B GSS-27 11 2025 3_TEST 0 0 0 0 0
A6020B GSS-27 12 2026 3_TEST 0 0 0 0 0
A6020B GSS-27 10 2027 4_DEV 0 0 0 0 0
A6020B GSS-27 11 2028 4_DEV 0 0 0 0 0
A6020B GSS-27 12 2029 4_DEV 0 0 0 0 0
Good afternoon Splunk Team,

I have my search query:

index=example_mine host=x.x.x.x [ | inputlookup myfiile.csv | return 10000 $myfile ] logins="successfully logged in"

The search was over the last 7 days, and I received results for everyone who successfully logged in. I need to put the results in a nice table format where X = each user and Y = time. Any help would be appreciated.

v/r
CMAz
We have a Splunk app that includes multiple scripted inputs. The app is deployed to 15 heavy forwarders, but we want one of the scripts to run on only one of them. I first tried adding host = <hostname> inside the scripted‑input stanza, but I now realize that this isn't the solution. Is there a way to restrict a scripted input so it executes on only a single server, without having to split the app?
Hey @livehybrid! Thank you for taking a look at my question. I tried this way and it combines everything (meaning across different sources). I want to keep the sources separate. I replied with a little more information in another reply, but I will also put it here. Here is my original query:

index=Basketball
| timechart span=1d count by players limit=100

For more information, I am counting something that the players did. In the players source there are about 50 people. When I say I want it individualized, I don't want the total of Player One, which is 100, to affect Player Two, who has 300. In your query they are combined, and the first threshold would be 28.57, whereas the two players should have different thresholds. To continue with the sample data I presented, Player One's first threshold would be 7.142 and Player Two's first threshold would be 21.43. If this is not possible then I will understand, but this is how I want the heatmap to work ideally. Does this make sense? Please let me know. I appreciate and thank you for your help. I hope to hear from you soon!
There are no defaults for charting.fieldColors.
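If it helps, the colors can be set explicitly per series in the panel's Simple XML; the series names and hex values below are placeholders for illustration, not defaults:

<option name="charting.fieldColors">{"success": 0x1E93C6, "error": 0xD93F3C}</option>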
Hi @gcusello. How are you? I'm the same person who uploaded the question two years ago. How can I check if we have an Indexer Cluster? The same with the Search Head. About the MC, yes, it is a Monitoring Console. We have a total of 6 servers:
1 of them includes Distributed Search, License Manager, Monitoring Console and SHCD (which I don't know exactly what it is).
Then we have 3 indexer servers.
1 server is the Heavy Forwarder.
The last one is the Search Head.
Regards!
Hello @bowesmana! Thank you for responding quickly, and apologies for the delay. So this would work if I didn't have the heatmap to go along with it. To give more information, I will include my original query:

index=Basketball
| timechart span=1d count by players limit=100

The above is my query, and when I add what you typed individually it works, but when I put it all together no results appear. I want to use the calculation I get from what you typed as the thresholds in my heat graph. Does that make more sense? For example, let's say Lebron has a total of 100 one week. Once put into the equation, the product would be 7.14. This is the first threshold, and the color would show whether he was below or above it. Now another week goes by and this time the total is 300. Now the first threshold that was once 7.14 goes up to 21.43, and the same thing happens. Does that make sense? Please let me know. Thank you for your help once again; I hope to hear from you soon!
Thank you, I must have missed this when looking through the documentation. To piggyback, is there a more up-to-date hex code list of the default options?
Please share your current dashboard source (otherwise we have no idea what you might have done wrong!)
Assuming you want all possible combinations of System, Group and Environment for each of the ScanMonth and ScanYear values already present in your results, you could try something like this:

index=sample_index sourcetype=sample_sourcetype AcknowledgedServiceAccount="No" System="ABC"
| eval ScanMonth_Translate=case(
    ScanMonth="1","January",
    ScanMonth="2","February",
    ScanMonth="3","March",
    ScanMonth="4","April",
    ScanMonth="5","May",
    ScanMonth="6","June",
    ScanMonth="7","July",
    ScanMonth="8","August",
    ScanMonth="9","September",
    ScanMonth="10","October",
    ScanMonth="11","November",
    ScanMonth="12","December")
| fields ID, System, GSS, RemediationAssignment, Environment, SeverityCode, ScanYear, ScanMonth
| fillnull value="NULL" ID, System, GSS, RemediationAssignment, Environment, SeverityCode, ScanYear, ScanMonth
| foreach System Group Environment ScanMonth, ScanYear, SeverityCode
    [| eval <<FIELD>> = split(<<FIELD>>, "\n") ]
| stats count AS Total_Vulnerabilities BY ScanMonth, ScanYear, System, Group, Environment, SeverityCode
| fields System, Group, ScanMonth, ScanYear, Environment, SeverityCode, Total_Vulnerabilities
| stats values(eval(if(SeverityCode="1 CRITICAL",Total_Vulnerabilities, null()))) as "4_CRITICAL"
    values(eval(if(SeverityCode="2 HIGH",Total_Vulnerabilities, null()))) as "3_HIGH"
    values(eval(if(SeverityCode="3 MEDIUM",Total_Vulnerabilities, null()))) AS "2_MEDIUM"
    values(eval(if(SeverityCode="4 LOW",Total_Vulnerabilities, null()))) as "1_LOW"
    sum(Total_Vulnerabilities) AS TOTAL
    by System, Group, ScanMonth, ScanYear, Environment
| fillnull value="0" 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW
| fields System, Group, Environment, ScanMonth, ScanYear, 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL
| replace "*PROD*" WITH "1_PROD" IN Environment
| replace "*DR*" WITH "2_DR" IN Environment
| replace "*TEST*" WITH "3_TEST" IN Environment
| replace "*DEV*" WITH "4_DEV" IN Environment
| sort 0 + System, GSS, Environment, ScanMonth, ScanYear
| appendpipe
    [| stats values(System) as System values(Group) as Group values(Environment) as Environment by ScanMonth ScanYear
    | eventstats values(System) as System values(Group) as Group values(Environment) as Environment
    | mvexpand System
    | mvexpand Group
    | mvexpand Environment
    | fillnull value="0" 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL ]
| stats sum(TOTAL) AS TOTAL sum(1_LOW) AS 1_LOW sum(2_MEDIUM) AS 2_MEDIUM sum(3_HIGH) AS 3_HIGH sum(4_CRITICAL) AS 4_CRITICAL by System, Group, ScanMonth, ScanYear, Environment
| sort 0 + System, Group, Environment, ScanMonth, ScanYear
Where is the input defined? You should be able to disable it where it is defined. If you have access to the command line on the machine, run:

splunk btool inputs list --debug | fgrep "<the input name>"

Once you know where the input is defined, you can go to that config file and delete the input, or disable it.
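As a minimal sketch (the stanza name below is just an example, not your actual input), disabling it in the inputs.conf where it is defined looks like this:

[monitor:///var/log/example.log]
disabled = 1

followed by a splunkd restart on that host so the change takes effect.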