All Posts
Hi

I've just tried and it downloaded without an issue for me. Is there a firewall between your machine and the Splunk download website? I'm wondering if this could be causing issues?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

@Waitomo
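(If the download keeps stalling at the same point, it may help to separate a network problem from a wget problem. A minimal sketch using the URL from the question; the flags below are standard curl/wget options:)

# Fetch only the response headers, with verbose connection/TLS detail
curl -vI "https://download.splunk.com/products/splunk/releases/9.4.2/linux/splunk-9.4.2-e9664af3d956.x86_64.rpm"

# Resume the partial download instead of restarting from 0%
wget -c -O splunk-9.4.2-e9664af3d956.x86_64.rpm "https://download.splunk.com/products/splunk/releases/9.4.2/linux/splunk-9.4.2-e9664af3d956.x86_64.rpm"

If curl stalls at the same offset, that points at a proxy or firewall inspecting the transfer rather than at wget itself.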
I'm trying to download Splunk using:

wget -O splunk-9.4.2-e9664af3d956.x86_64.rpm "https://download.splunk.com/products/splunk/releases/9.4.2/linux/splunk-9.4.2-e9664af3d956.x86_64.rpm"

and it's hanging at 35%. I was wondering if this is a known issue.
I had the same issue a few weeks ago, but it is already solved. After spending considerable time on it, I believe there is a documentation gap. To maintain a consistent documentation path within the Splunk 9.4.1 upgrade process, I suggested adding the following link as a reference for Deployment Server: https://docs.splunk.com/Documentation/Splunk/9.4.1/Updating/Upgradepre-9.2deploymentservers

This should be included under the READ THIS FIRST section for versions later than 9.2: https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/AboutupgradingREADTHISFIRST
So in terms of "the sourcetype mydevice:clone is also indexed on my local indexer" - you have cloned it, but because it still has _TCP_ROUTING=local_indexers it will also be indexed on the local indexers. How come you are sending to a secondary Splunk server via syslog instead of Splunk2Splunk?

If you don't want to send the cloned sourcetype to the local indexers then you need to use another transform to set "_TCP_ROUTING=" (no value) as well as setting your syslog routing in the other transforms.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
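(A minimal sketch of that extra transform, reusing the stanza names from the question; the transform name mydevice-clear-tcp is made up for illustration, and the empty FORMAT is exactly the "no value" trick described above:)

# props.conf - applied to the cloned sourcetype
[mydevice:clone]
TRANSFORMS-clearroute = mydevice-clear-tcp

# transforms.conf - blank out the inherited TCP routing on the clone
[mydevice-clear-tcp]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT =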
Hi @woodcock, can you please point me in the right direction? :)
Hi @yuanliu, can you please check what you suggested? It is different from what I am looking for :-). I have put in your code and checked to be sure, in case I missed something. For example, in the filter try to select all values containing "bundle". So, we have four matched values. How can I select these four values with one click?
Hi @msarkaus

It looks like you have multiple events with the same content in them? If you have 1000s of events you should probably use something like stats to group them up:

| stats count by Latitude Longitude WarningMessages

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @livehybrid

Yes, my understanding is that setting _TCP_ROUTING on the input will forward the logs to the local indexers, and at the same time the sourcetype is cloned to be forwarded to the second Splunk. Maybe I didn't understand something.
Hi @Nicolas2203

It looks like you are setting _TCP_ROUTING to your local indexers in the input but then do not change it on the cloned data; you are setting _SYSLOG_ROUTING, but _TCP_ROUTING is still also sending to the local indexers.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thank you for your response! This solution worked. I was not aware of the 10-result limit with map.
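(For reference: the limit mentioned here is map's maxsearches argument, which defaults to 10. It can be raised explicitly; the index and field names in this sketch are purely illustrative:)

| map maxsearches=100 search="search index=myindex id=$id$"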
For customers hitting the Cannot register new_channel error regardless of the persistent queue at the IF, applying 9.4.x / 9.3.2 / 9.2.4 / 9.1.7 and above should fix the issue or reduce the chance of events entering the Splunk Cloud DLQ.
Hi @ws

The clean command does not work on clustered indexes - see https://docs.splunk.com/Documentation/Splunk/latest/Indexer/RemovedatafromSplunk#How_to_delete:~:text=Note%3A-,The,-clean%20command%20does

As the others have said, you could reduce the retention to basically nothing so that the data ages out, before then removing the indexes.conf stanza for the index and deploying it out to your indexers. However, note that this will not remove the old directory structure for this index on the indexers; if you want to completely remove it, you will need to delete the folder structure on each node as per the docs: "Once you've applied the indexes.conf changes and the peer nodes have restarted, remove the index's directories from each peer node."

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
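(A rough sketch of that retention squeeze for the index being retired; myindex is a placeholder, and both attributes are standard indexes.conf settings:)

# indexes.conf - age the data out almost immediately
[myindex]
# Freeze (delete, if no coldToFrozenDir is set) anything older than 1 second
frozenTimePeriodInSecs = 1
# Optionally also cap the total size so buckets roll sooner
maxTotalDataSizeMB = 1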
Hi @dlevesque1

Have you created this through the "Content Management" section within ES, or is this a correlation search that you have created with the notable alert action? Ensure that you are creating it from within the Content Management section if not already. Which version of ES are you using?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Casial06

Firstly, I think you should be able to achieve this with a stats rather than a join; I'll show an example below. The other thing to consider is that using a span of 4 hours might cause incorrect matchings depending on when within the 4-hour span the activity occurs. For example, if an account is locked at 11:50 and unlocked at 12:05, the 4-hour span might split into 08:00-12:00 and 12:00-16:00, meaning that the lock and unlock are captured in different spans.

Instead you could look at just checking whether there has been a lock since the last unlock, or no unlocks at all. Check the following and see if it's useful; I've generated some sample data to work through some scenarios:

| makeresults format=csv data="_time,Account_Name,EventCode,host,Workstation_Name,src_ip
2025-04-12T08:00:00Z,Acct1,4740,hostA,workA,10.1.1.1
2025-04-12T09:00:00Z,Acct1,4740,hostB,workB,10.1.1.1
2025-04-12T13:00:00Z,Acct1,4767,hostB,workB,10.1.1.1
2025-04-12T08:10:00Z,Acct2,4740,hostC,workC,10.2.2.2
2025-04-12T09:12:00Z,Acct2,4740,hostC,workD,10.2.2.2
2025-04-12T14:15:00Z,Acct2,4740,hostE,workD,10.2.2.2
2025-04-12T10:00:00Z,Acct3,4740,hostD,workF,10.3.3.3
2025-04-12T15:00:00Z,Acct3,4767,hostD,workG,10.3.3.3
2025-04-12T11:00:00Z,Acct4,4740,hostG,workH,10.4.4.4
2025-04-12T15:00:00Z,Acct4,4767,hostG,workH,10.4.4.4
2025-04-12T13:00:00Z,Acct5,4740,hostH,workI,10.5.5.5
2025-04-12T14:00:00Z,Acct1,4740,hostA,workA,10.1.1.1"
| eval _time=strptime(_time,"%Y-%m-%dT%H:%M:%SZ")
| eval UnlockTime=IF(EventCode=4767,_time,null())
| eval LockTime=IF(EventCode=4740,_time,null())
| stats earliest(LockTime) as firstLockTime, latest(LockTime) as lastLockTime, latest(UnlockTime) as lastUnlockTime, range(_time) as timeRange, count(eval(EventCode=4740)) as Locked, count(eval(EventCode=4767)) as Unlocked by Account_Name
| where lastLockTime>lastUnlockTime OR isnull(lastUnlockTime)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hello,

I am trying to create a notable event in the Mission Control area within Enterprise Security to capture when an index has not received data within 24 hours. This should be simple and straightforward, but I can't seem to figure out why this isn't working.

I have the detection search as:

index=<target index> | stats count

The condition in the alert to trigger is: search count = 0

I also have email alerts set up as an additional way to notify the proper people. This part of the security content works, but why doesn't the actual event appear in the Mission Control area? This has me stumped; any help would be greatly appreciated.
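(For comparison, a common pattern for this kind of "silent index" detection is a tstats search, which returns a single count row even when no events match; <target index> is the placeholder from the post, and hard-coding the 24-hour window in the search is an assumption:)

| tstats count where index=<target index> earliest=-24h latest=now
| where count=0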
Hi Splunk Community,

I have a new issue concerning this config, some particular behaviour that I don't understand. Here is my configuration:

# classic input on TCP, these are syslog logs
[tcp://22000]
sourcetype = mydevice:sourcetype
index = local_index
_TCP_ROUTING = local_indexers:9997

# The idea is to clone the sourcetype, but not logs containing LAN1 and LAN2; they are not necessary for the second Splunk
[mydevice-clone]
CLONE_SOURCETYPE = mydevice:clone
REGEX = ^((?!LAN1|LAN2).)*$
DEST_KEY = _SYSLOG_ROUTING
FORMAT = sending_to_second_splunk

# on the props I apply the configuration made in the transforms
[mydevice:sourcetype]
TRANSFORMS-clone = mydevice-clone

# IP of the HF that will send data to the second Splunk
[syslog:sending_to_second_splunk]
server = 10.10.10.10:port
type = tcp

Issue encountered: this configuration works partially.
- Data is properly indexed on the second Splunk, without LAN1 and LAN2 data.
- Data containing LAN1 and LAN2 is indexed on the local indexer.
- However, the sourcetype mydevice:clone is also indexed on my local indexer, resulting in some data being indexed twice with two different sourcetypes.

I don't understand why this is happening and I am seeking help to resolve this issue; I have the feeling that I am missing something.

Thanks,
Nicolas
I'm creating a multiple-locked-accounts search query. While checking the account first, if it has 4767 (unlocked) it should ignore accounts that have 4767 within a span of 4 hrs. This is my current search query and I am not sure if the join command is working.

index=*
| join Account_Name
    [ search index=* EventCode=4740 OR EventCode=4767
    | eval login_account=mvindex(Account_Name,1)
    | bin span=4h _time
    | stats count values(EventCode) as EventCodeList count(eval(match(EventCode,"4740"))) as Locked, count(eval(match(EventCode,"4767"))) as Unlocked by Account_Name
    | where Locked >= 1 and Unlocked = 0 ]
| stats count dc(login_account) as "UniqueAccount" values(login_account) as "Login_Account" values(host) as "HostName" values(Workstation_Name) as Source_Computer values(src_ip) as SourceIP by EventCode
| where UniqueAccount >= 10
So if I'm not to use:

| eval _raw="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres.................."

should I use:

| eval msgTxt="StandardizedAddres SUCCEEDED - FROM: {\"StandardizedAddres\":\"SUCCEEDED\",\"FROM\":{\"Address1\":\"123 NAANNA SAND RD\",\"Address2\":\"\",\"City

And do I not include the /?
Hi @predatorz

These are just two of many components that make up the Splunk product, presumably abstracted away from splunkd to avoid a huge monolithic system. The main splunkd process will launch child processes such as these depending on your configuration and the features enabled.

It sounds like Nessus is being overcautious here; however, if you require confirmation of exactly what the process is doing, then I would recommend reaching out to Splunk Support or your Account Team, who should be able to help further.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Just a heads up, it was indeed an issue with the extraction of the fields; my events are so big that Splunk stops extracting fields at some point. Thanks all for the help.
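(For anyone else hitting this: the cutoff described here is typically governed by the automatic key/value extraction limit in limits.conf; a sketch showing the stock default, which could be raised at the cost of more extraction work per event:)

# limits.conf
[kv]
# Maximum characters of an event that automatic field extraction examines (default 10240)
maxchars = 10240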