All Posts

@maiks1 When I saw another field's values show up in a given field, it turned out a sysadmin had changed the layout of the logs. This can show up as a new order for the same number of fields, the introduction of a new field, the removal of a field (which shifts the position of every later field in the log), etc. The sourcetype configuration wasn't updated, so it keeps parsing per its definition. Lesson: walk through the whole thing slowly, starting at the beginning. Once you've identified what changed, then you can work on why it changed.
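One quick way to pin down when a layout change like this happened is to count the delimited columns in each raw event over time. A minimal sketch, assuming comma-delimited logs in a sourcetype called my_csv_logs (the index, sourcetype and delimiter are placeholders to adapt):

index=main sourcetype=my_csv_logs
| eval column_count=mvcount(split(_raw, ","))
| timechart span=1h count BY column_count

A new column_count value appearing mid-chart marks the moment a field was added or removed upstream; a pure reordering keeps the count the same, so for that case spot-check a few raw events around the suspected time instead.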
Hi @jason_hotchkiss, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @Amadou, could you better describe your requirement? In the alert you should insert the conditions to verify. E.g., if you want to check that in Windows there aren't more than 10 failed-logon events (EventCode=4625), you could run:

index=wineventlog EventCode=4625
| stats count BY host user
| where count>10

As I said, the search depends on the conditions to check. Ciao. Giuseppe
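If the alert should flag bursts rather than a total over the whole search window, a hedged variation is to bucket the count by time first (the 15-minute span and the >10 threshold below are placeholders to adapt):

index=wineventlog EventCode=4625
| bin _time span=15m
| stats count BY _time host user
| where count>10

Scheduling this over a recent range (e.g. the last hour) and triggering on "number of results greater than 0" gives a simple failed-logon burst alert.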
I ended up taking the suggestion I received on the Slack channels of using the <change> block and setting a token.

<panel>
  <title>Top 25 by $label$</title>
  <input type="dropdown" token="split_by" searchWhenChanged="true">
    <label>Split by</label>
    <change>
      <condition value="value1">
        <set token="label">MY VALUE 1</set>
      </condition>
      <condition value="value2">
        <set token="label">MY VALUE 2</set>
      </condition>
      <condition value="value3">
        <set token="label">MY VALUE 3</set>
      </condition>
    </change>
    <choice value="value1">MY VALUE 1</choice>
    <choice value="value2">MY VALUE 2</choice>
    <choice value="value3">MY VALUE 3</choice>
    <default>value1</default>
    <initialValue>value1</initialValue>
    <fieldForLabel>split_by</fieldForLabel>
    <fieldForValue>split_by</fieldForValue>
  </input>
  <chart>
    <search>
      <query></query>
    </search>
  </chart>
</panel>
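For completeness, a minimal sketch of how the two tokens could be consumed, assuming the dropdown values (value1, value2, value3) are real field names in the data; the index and base search below are placeholders, not from the original panel:

<query>index=main sourcetype=my_data | top limit=25 $split_by$</query>

The panel title then renders as "Top 25 by MY VALUE 1" (or whichever label the <change> block set), while the search itself splits by the underlying field name.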
Hi, thanks for answering my questions. Since I can update transforms.conf myself, I only need an admin to create collections.conf, correct? Thanks again
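For reference, a minimal sketch of what the two files might contain for a KV store lookup; the collection and field names are made up for illustration:

collections.conf (the part that needs the admin):
[my_assets]
field.ip = string
field.owner = string

transforms.conf (the part you can manage yourself):
[my_assets_lookup]
external_type = kvstore
collection = my_assets
fields_list = _key, ip, owner

Once both stanzas are in place, you can sanity-check access with a search such as | inputlookup my_assets_lookup | head 5.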
@PickleRick Thank you very much for your reply. And sorry for asking so many questions. I looked into the backfill range and found that it is a value related to the load on the system. Is it summarized separately from the previously set summary range?

1. The summary range is 1 month, and if the summary is stopped partway through due to system load, it is replaced with the backfill range (i.e. the backfill range is used when the summary range does not complete).
2. The summary range is 1 month, and after all of that month's data has been summarized, the next 7 days of data are filled in as a backfill range (i.e. it operates as summary range + backfill range).

Which of the two concepts is more accurate?
I have results, but the problem is that it displays users who don't have an IP address (so it shows users from index1 even if no match was found in index2). What I would like is for it to fetch and display only the users whose IP addresses match at the right time. Furthermore, I always end up with more rows than expected (3000 versus 1000).
I created my input in the DB Connect app directly from the Splunk Enterprise UI. As I don't have permission for operations-related tasks, I am unable to check that. I have also observed that another of my inputs indexes the events twice. I'm not sure if this duplication issue is because of the fetch size set in the input schedule (by the way, I have left it blank to use the default settings).
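One way to confirm whether events really are duplicated at index time is to count identical raw events over the affected time range. A minimal sketch, assuming the DB Connect data lands in an index called db_inputs (index and sourcetype are placeholders for your input):

index=db_inputs sourcetype=my_dbconnect_input
| stats count BY _raw
| where count>1
| stats count AS duplicated_events

If duplicated_events is large, the same rows are being re-read; with rising-column inputs that often points at the checkpoint/rising-column configuration rather than the fetch size.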
Hello, I just want to know, before creating an alert, how do you find the keywords that will compose your alert? Please answer with an example. Thank you so much.
Hello, the Apache agent was configured, and when trying to run ./install.sh or httpd -t -D DUMP_INCLUDES we get the error below. Has anyone had this before?

httpd: could not open configuration file /scratch/syseng/workspace/Apache.../apache/stage/install/conf/httpd.conf: No such file or directory

(By the way, there is no /scratch filesystem at all.)
Thanks Rajesh for your prompt response!

1. We use containerd-shim-runc as the CRI. Not sure if that is compatible with the Docker runtime.
2. I couldn't find any Docker process running on the worker node, so I couldn't execute Docker-related commands.
3. Is there a dependency on Docker? We have a similar setup of the machine agent on an EKS 1.23 cluster without Docker, and it functions as expected.
Hi @m92, Splunk isn't a database, so avoid using the join command because it's very slow; in addition, you divided your search into three levels, adding more slowness. Instead, try to correlate events using stats BY the correlation key, something like this (to adapt to your use case):

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| regex Users!="^AAA-[0-9]{5}$"
| eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP)
| eval ip=coalesce(IP, srcip)
| stats values(Users) AS Users earliest(_time) AS earliest latest(_time) AS latest BY ip
| eval earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table Users, ip, earliest, latest

Ciao. Giuseppe
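Since the goal in this thread is to show only users whose IP actually appears in index2, a hedged addition to the stats line is to count the events coming from each index and keep only IPs seen in both (the regex and ::ffff: cleanup steps from the search above are omitted here for brevity):

(index="index1" Users=* IP=*) OR (index="index2" tag=1)
| eval ip=coalesce(IP, srcip)
| stats values(Users) AS Users count(eval(index=="index1")) AS user_events count(eval(index=="index2")) AS bastion_events earliest(_time) AS earliest latest(_time) AS latest BY ip
| where user_events>0 AND bastion_events>0

This drops IPs seen in only one of the two indexes, which is what the inner join was meant to do, without the join command's result limits.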
Hi @shakti, surely the number of CPUs is one of the root causes of your issue, because Splunk requires at least 12 CPUs for Indexers, and 16 if you also have ES. Anyway, check the IOPS (using a tool such as Bonnie++ or FIO), because this is the usual major cause of queue problems. Ciao. Giuseppe
Hi all! I'm currently trying to create an RDP session analysis dashboard. I'm using Sysmon event logs, specifically Event ID 3, to create an SPL query that shows traffic on port 3389 with a few filters. I only want to see usernames that exist in the Windows domain.

index=windows source=sysmon DestinationPort=3389 EventCode=3 Image!="C:\Program Files\RANDOMAPP*"
| rename User as SourceUser
| search SourceUser!="NT AUTHORITY\NETWORK SERVICE" SourceUser!="NT-AUTHORITY\Network Service" SourceUser!="NT-AUTHORITY\SYSTEM"
| stats count by SourceUser Image SourceHostname DestinationHostname SourceIp DestinationIp DestinationPort
| sort - count

Since last week, the "Image" value has unexpectedly started appearing in the "User" field in all events. Why is this happening, and how can I prevent it from appearing in the "User" field? When I add the following filter, no events are displayed. Could it be that the "User" field is getting mixed up with the "Image" field?

User!=Windows\* User!="Program Files*"

Also, if you check the events, you can see 2 events being displayed for "User". Sorry for the bazillion questions, but I'm starting to get a bit frustrated here. Thanks in advance for your help and have a great day!
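If the User field really is picking up executable paths, one hedged way to confirm it (and to filter it without fighting backslash escaping in wildcards) is a regex-based exclusion; the pattern below simply drops values that start with a drive letter and a colon, which is an assumption about what the bad values look like:

index=windows source=sysmon DestinationPort=3389 EventCode=3
| rename User as SourceUser
| where NOT match(SourceUser, "^[A-Za-z]:")
| stats count by SourceUser

If the counts change dramatically with this filter, the field extraction (or the Sysmon configuration/add-on version) changed, rather than the underlying logon behaviour.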
Hello Splunkers, I'm new to Splunk and I'm stuck; I'm getting more data than I'm supposed to. Users are showing up when they shouldn't, and vice versa. The purpose of the query is to determine which users are accessing the bastion with tag=1 from the "index2" index. However, there's no information on the users there, which is why I'm fetching user data from the "index1" index by performing a join on the IP address. The ultimate goal is to display the results in the format: Users - IP - _time. It's important to note that the IP addresses are dynamic. When I run this command, it returns 1000 rows:

`index="index2" tag=1 | table srcip, _time`

However, when I run the following, I get a lot more (11000), even though I should get the same number, since I'm only fetching users from the other index and shouldn't gain any additional rows:

index="index1"
| search Users =* AND IP=*
| fields Users, IP, _time
| where NOT match(Users, "^AAA-[0-9]{5}\$")
| eval IP=if(match(IP, "^::ffff:"), replace(IP, "^::ffff:(\d+\.\d+\.\d+\.\d+)$", "\1"), IP)
| eval ip=IP
| table Users, ip, _time
| join type=inner ip
    [ search index="index2" tag=1
    | fields srcip, _time
    | eval ip=srcip
    | table ip, _time]
| table Users, ip, _time

Does anyone have a solution?
Also, if I may ask, what would be a good IOPS figure for Splunk?
This is the document you might want to read to understand how Splunk reads the configs. Also, @gcusello's remark about system/local vs. configuring it in an app is valid.
The datamodel accelerated summary is indeed stored in a bucket, but it can be (though usually isn't) stored on a different piece of storage (in a different directory or on a different volume). It is still organized in buckets, and each raw data bucket has its corresponding DAS bucket. You're still thinking in terms of just time periods, whereas data is stored and rolled by buckets. Buckets can have overlapping time ranges, or one can even "contain" another's whole time range. Also, the time range for summaries works a bit differently. The backfill range doesn't produce duplicates. It updates the data (not sure about the gory technical details underneath - maybe it does internally keep some duplicates and just shows the most current generation, but effectively it just shows the "current" state) - it's meant as a way to keep the DAS current even if some lagged events are ingested after the initial summary search run has already been done. So don't try to overoptimize too early.
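If you want to see which time range the accelerated summary actually covers at any moment, a simple hedged check is to query the summary alone; the data model name below is just an example:

| tstats summariesonly=true count FROM datamodel=Authentication BY _time span=1d

Days that return a count come from already-built summary buckets; days missing from the output (but present when you drop summariesonly=true) haven't been summarized or backfilled yet.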
@gcusello Appreciate your reply. We have an indexer clustering environment; however, for both the indexers and the search head we are using only 4 physical CPU cores. Do you think that can cause this problem?
Hi @shakti, there's a delay between the event timestamp and the indexing timestamp, probably caused by too high a data volume. This could be caused by a queue issue on the Forwarder, by network latency, or by a resource problem (usually storage performance) on your Indexers. You can check queues using a search like the following:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

About resources, did you check the IOPS of your storage? Do you have the correct number of CPUs? Finally, does your network have sufficient bandwidth to support your data volume? Ciao. Giuseppe
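To quantify the delay itself (how far behind the event timestamp the indexing is), a simple hedged check is to compare _indextime with _time on the affected data; the index name and time range below are placeholders:

index=your_index earliest=-4h
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) AS avg_lag perc95(lag_seconds) AS p95_lag max(lag_seconds) AS max_lag BY host, sourcetype

A lag that grows steadily over the day usually means the pipeline can't keep up with the volume (queues or IOPS), while a roughly constant offset more often points at timestamp or timezone parsing.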