All Posts


That's one way to do it. Judging from your working code, you want to replace the single digit with 0<digit> in either of those two fields, not just when both parts are short (which was suggested by your initial sample). You can just do it with

| inputlookup dsa.csv
| rex mode=sed field=Description "s/\b\d\b/0&/g"
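If you want to sanity-check the sed expression before pointing it at the lookup, a throwaway test with makeresults is enough; the field name Description and the sample value below are just placeholders, not taken from the original lookup:

| makeresults
| eval Description="3-12"
| rex mode=sed field=Description "s/\b\d\b/0&/g"
| table Description

This should return 03-12, since \b\d\b only matches digits that stand alone between word boundaries.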
There is no such thing as "index listening". It's the forwarder's job to collect data, prepare it properly (most importantly, add proper metadata like source, sourcetype, host and destination index) and send it to the destination indexer or intermediate forwarder. So you don't have to change anything on the index side itself. An index is just a "bag" receiving events flowing from your forwarders. You need to find where the data comes from and check the forwarder's configuration on that system.

If this particular piece of configuration is being pushed from the deployment server in a pre-set state, that might be a bit more complicated. But the question which can affect other things as well (like apps assigned to this server) is how the server syslog_01 was "migrated" to syslog_02, especially concerning the Splunk forwarder's config. If it was simply moved from one server to another, there is a possibility that the forwarder's name was set to a static value in the config and was retained after the configuration was moved, so your new forwarder will still report to your DS under the old name. Messy.
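For reference, that metadata normally lives on the forwarder itself. A minimal sketch of what the relevant stanzas tend to look like; the monitored path, index name and indexer addresses below are placeholders, not values from this environment:

# inputs.conf on the universal forwarder
[monitor:///var/log/messages]
index = linux
sourcetype = syslog

# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997

Checking these files (or the deployment apps that deliver them) on the new host is usually the quickest way to confirm where the data is headed.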
@yuanliu This also works fine. Thanks for your suggestion.
Hello Splunkers! As you can see in the screenshot below, the jobs are running fine, but events are not being collected into the summary index. Please suggest some potential reasons and fixes. (Screenshot: scheduled search configured to push data to a summary index.)
That's because you're collecting the contents of the event in a field called logEvent. If you want to collect this as a raw event, you obviously have to set the _raw field. Are you aware that using a sourcetype other than stash (or stash_hec for output_format=hec) uses up your license? You can also have issues with timestamps if you don't set _time properly before collecting (and generally you should set all the default metadata fields).
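To illustrate the point about _raw and _time, here is a rough sketch of the collect call from the question, reworked to set both fields first; the index name and the shortened event text are taken from the question and trimmed for brevity:

| makeresults
| eval _time=now()
| eval _raw="timestamp=1723830464 notification_from_address = \"172.20.0.17\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.94.0 = \"AP7\""
| collect index=aruba_snmp sourcetype=stash output_format=raw testmode=true

Note that the backslashes are only SPL string escaping; the value stored in _raw should contain plain quotes. Remove testmode=true once the preview looks right.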
You're getting close. One streamstats is not enough because you can't "pull" events you already passed while processing the stream. Assuming you want to find when you have at least three consecutive ACC=1, you can do it like this

| eval Description=case(RML<104.008, "0", RML>108.425, "1", RML>=104.008, "OK", RML<=108.425, "OK")
| eval Warning=case(Description==0, "LevelBreach", Description==1, "LevelBreach")
| table LWL UWL RML
| eval CR=if(RML<UWL,"0",if(RML>LWL,"1","0"))
| streamstats window=3 sum(ACC) as running_count

This will mark the last of three consecutive ACC=1 with running_count=3. So we're on the right track so far: we've found where our streak ends. Now we have to do a little trick. Since we can't pull events "from behind", we need to

| reverse

so that we're looking at our events in the other order. Now we know that the event with running_count=3 will be starting our 3-event streak. So now we have to mark our 3 events looking forward:

| streamstats current=t window=3 max(running_count) as mark_count

This will give us a value of mark_count=3 for all events for which any of the last three events had a running_count of 3 (which means that we're no further than 3 events from the _last_ event of our 3-event streak). Now all we have to do is find all those events we marked:

| where mark_count=3

And now we can just tidy up after ourselves:

| fields - running_count mark_count
| reverse

And there you have it. Unfortunately, since it uses the reverse command it can be quite memory-consuming (and might even have some limits I'm not aware of at this time).
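Strung together, the part that does the actual streak detection is only a few lines. This sketch assumes a field ACC with values 0/1 already exists (how it is derived from RML is left out here) and uses == in the where clause for clarity:

| streamstats window=3 sum(ACC) as running_count
| reverse
| streamstats current=t window=3 max(running_count) as mark_count
| where mark_count==3
| fields - running_count mark_count
| reverse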
I want to manually add an event to an index, and using collect seems to be the most straightforward method. I am asking for a method to use makeresults and eval to add field quotes like the native Aruba SNMP log format, to send in raw format to an index.

Background: We had a power outage at one of our sites. Report and alert searches look for active user Wi-Fi sessions. Because the access points were offline, when users left for the day the Wi-Fi session end log events were not sent from Aruba to Splunk, which is causing false positive alerts.

The Aruba SNMP logs look like this:

timestamp=1723828026 notification_from_address = "172.20.0.69" notification_from_port = "34327" SNMPv2-SMI::mib-2.1.3.0 = "10679000" SNMPv2-SMI::snmpModules.1.1.4.1.0 = "1.3.6.1.4.1.14823.2.3.1.11.1.2.1219" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.60 = "0x07e808100a0706002d0700" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.51.0 = "192.168.50.54" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.52.0 = "0xd8be1f2f9c1a" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.3.0 = "0x2462ce8053b1" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.94.0 = "RAP1053a" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.28.0 = "0" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.59.0 = "0" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.103.0 = "2" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.136.0 = "11" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.137.0 = "1"

My search:

| makeresults
| eval timeStamp=now()
| eval logEvent="timestamp=1723830464 notification_from_address = \"172.20.0.17\" notification_from_port = \"43015\" SNMPv2-SMI::mib-2.1.3.0 = \"2063900\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.60 = \"0x07e8080e0d310f002d0700\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.51.0 = \"192.168.50.67\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.52.0 = \"0xd8be1f7d1076\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.3.0 = \"0x482f6b06b171\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.94.0 = \"AP7\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.28.0 = \"0\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.59.0 = \"0\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.103.0 = \"2\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.136.0 = \"10\" SNMPv2-SMI::enterprises.14823.2.3.1.11.1.1.137.0 = \"1\""
| collect index=aruba_snmp sourcetype=snmp_traps output_format=raw testmode=true

The search result looks like what I want, but when sent in raw format the escape backslashes (\) are visible. How do I obscure or remove the \ in raw format? Thank you for any help in advance.
Hey there! It sounds like you should have a deployment server (https://docs.splunk.com/Documentation/Splunk/9.3.0/Updating/Deploymentserverarchitecture) somewhere in the mix. The server classes mentioned should be controlled there, and it should be a different instance from the index cluster master. The universal forwarders get their configurations from the deployment server. You should be able to go into the deployment server and both see the configuration for this index and assign it to the appropriate server (once it has a deploymentclient.conf, at least). If you can find the deployment server, happy to help further.
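For reference, a forwarder is pointed at a deployment server through deploymentclient.conf; a minimal sketch, with the hostname below being a placeholder rather than anything from this environment:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089

If the forwarder already has a file like this, its targetUri tells you which host is acting as the deployment server.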
Hi Experts, Is it possible to change the "Return to Splunk" link on the home page so that it goes to a custom URL instead of the default URL? If anyone knows how to do this, I'd appreciate the help! Thanks
So, I figured out that percentages don't work well with dynamic element backgrounds. How can I work around that?
Hello, There is an index named "linux" in our environment that needs to have the source universal forwarder changed to reflect a new server that is forwarding data. In other words, a server "syslog_01.server.net" was migrated to a new server "syslog_02.server.net" (not the actual domains). The index "linux", I believe, is still listening to syslog_01 and needs to be changed to syslog_02. The universal forwarder was installed on the syslog_02 server. So I have two fairly high-level questions:

1.) How would I go about seeing the current configuration of the "linux" index (at least in terms of where it is listening)?

2.) How would I change where this index is listening?

I've inherited the Splunk environment and am still a little fuzzy on how it was originally configured (the person who set it up no longer works here), but it looks like the data path goes like this: universal forwarder > heavy forwarder server > two index servers, with a master server to control the index servers. I believe this is a standard configuration. The person who set up the environment left scant documentation regarding universal forwarder configuration. Apparently, universal forwarders are "Configured automatically by adding new universal forwarder server to linux_outputs or windows_outputs class" in the master server. However, in the master server (splunk_home/etc/system/local), serverclass.conf doesn't contain any data, although I'm not entirely sure this would be the correct config file to change. Again, I'm fairly new to this environment and not sure how to proceed. Any and all input would be appreciated. Thank you!
Thank you, richgalloway! It works.
I really need help. I'm trying to get my panels to move from red to green based on live stats, but nothing works. I tried the UI and I'm pretty sure I got the right thing selected, but my panels won't show up red, yellow or green. Can anyone please help me out? Also, I figured out that percentages don't work well with dynamic element backgrounds. How can I work around that?
Have you tried the stats command? ... | stats distinct_count(username) as users by VLAN ...  
Hi, I have a scenario where I want to calculate the duration between the 1st and last event. The thing is, these events can happen multiple times for the same session. The 1st event can happen multiple times and every time it is the exact same thing, but I only want the transaction to start from the very first event so that we know the exact duration. Sample events below - see the last 2 events, where one says MatchPending and the other says MatchCompleted. What I want is to calculate the duration between the 1st event and the last event where it says MatchCompleted.

2024-08-16 13:43:34,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:38,630|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchPending"
2024-08-16 13:43:50,516|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:57,630|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchPending"
2024-08-16 13:44:15,516|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:50,510|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchCompleted"

Any help is appreciated.

Best Regards, Shashanlk
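One possible way to sketch this, assuming there is some field that ties the related events together (called session below, which is hypothetical and would need to be replaced with whatever correlates your events):

| rex "\"status\":\"(?<status>\w+)\""
| stats min(_time) as start_time max(eval(if(status=="MatchCompleted", _time, null()))) as end_time by session
| eval duration=end_time-start_time

This takes the earliest event per session as the start and the MatchCompleted response as the end; duration comes out in seconds.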
After reading through these cases, I still cannot figure out an effective way to resolve my issue. I repeated the installation process carefully, but the result was the same. The Events Service resides on the same machine as the Enterprise Console and Controller. The machine host name is specified in /etc/hosts as "appd-ctl", the name shown in the error messages. I even shut down the firewall when starting the Events Service, but there was no difference.

-------------------- error message ------------
Task failed: Starting the Events Service api store node on host: appd-ctl as user: root with message: Connection to [http://appd-ctl:9080/_ping] failed due to [Failed to connect to appd-ctl/fe80:0:0:0:be24:11ff:fe59:c94a%2:9080].
---------------------------------------------------------------------
. . . . . . .
INFO [2024-08-16 14:35:06.202] com.appdynamics.orcha.core.executor.DefaultPlaybookExecutor: Executing : Starting the Events Service api store node
ERROR [2024-08-16 14:45:07.060] com.appdynamics.orcha.core.executor.DefaultPlaybookExecutor: Error executing task: TaskSpec[async=false,deleteDir=true,heartbeatInterval=60,ignoreErrors=false,localModule=false,longRunning=false,longRunningInitInterval=500,longRunningPollInterval=1000,longRunningWriteInterval=200,name=Starting the Events Service api store node,operationSpecList=[ShellSpec[chdir=/opt/appdynamics/platform/product/events-service/processor,command=[nohup, bin/events-service.sh, start, -p, conf/events-service-api-store.properties],commandForLogging=<null>,consoleInputFile=<null>,consoleOutputFile=<null>,environment={DONOTDAEMONIZE=true,JAVA_HOME=/opt/appdynamics/platform/product/jre/1.8.0_422},failOnError=true,hideContext=false,spawn=true,timeout=0,assertionSpecList=[],isLocalOperation=false,name=shell], UriSpec[body=<null>,failOnError=true,format=TEXT,fragment=<null>,headers=<null>,hiddenQueryStringVars=<null>,host=<null>,ignoreStatusCode=-1,method=get,path=<null>,port=<null>,query=<null>,scheme=<null>,timeout=60,uri=http://appd-ctl:9080/_ping,userName=<null>,retryInterval=5,retryTimeout=600,assertionSpecList=[],isLocalOperation=false,name=uri]],registerOutputAs=<null>,runLocal=false,separateProcess=false,taskSshTimeout=7200000,uid=<null>,updateThreshold=30]
ERROR [2024-08-16 14:45:07.073] com.appdynamics.orcha.core.OrchaRunnerImpl: Error executing:
ERROR [2024-08-16 14:45:07.085] com.appdynamics.platformadmin.core.job.JobProcessor: Platform/Job [1/818959e9-5f25-49af-9510-591ca406a7f0]: Stage [Start Events Service Cluster] failed due to [Task failed: Starting the Events Service api store node on host: appd-ctl as user: root with message: Connection to [http://appd-ctl:9080/_ping] failed due to [Failed to connect to appd-ctl/fe80:0:0:0:be24:11ff:fe59:c94a%2:9080].]
We have had this same issue in our environment. The fix we have come up with is to create an app for the 4300X events that are specific to cisco:ftd, where we parse out all the fields (you can use KV_MODE=auto for these, but some fields like url don't get extracted correctly since they oftentimes have '=' in the url string). In order to address the other message IDs that match the number/format of cisco:asa, we make a 'custom' app and update it each time we update Splunk_TA_cisco-asa. In that app (we named it Splunk_TA_cisco-asa-ftd) we just copy over the /default/ and /local/ props.conf files and change the sourcetype declaration from [cisco:asa] to [cisco:ftd]. The other files (like transforms) aren't needed because Splunk already has those definitions in the Splunk_TA_cisco-asa app; you just need to tell it to do all the eval/extract/transform/etc. functions from props.conf for the other sourcetype. If you use eventtypes, you should also update those. We updated Splunk_TA_cisco-asa/local/eventtypes.conf to use 'sourcetype IN (cisco:asa, cisco:ftd)' to address that issue in the 'standard' app. I know that seems like a lot of customization, but after doing the customization/upgrade a few times, it's not so bad.
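To make the shape of that customization a bit more concrete, here is a rough sketch of the cloned props.conf; the stanza body is illustrative only, not the actual contents of Splunk_TA_cisco-asa:

# Splunk_TA_cisco-asa-ftd/default/props.conf -- copied from the ASA add-on, stanza renamed
[cisco:ftd]
KV_MODE = auto

The eventtypes change follows the same idea: each relevant stanza's search in Splunk_TA_cisco-asa/local/eventtypes.conf is widened from sourcetype=cisco:asa to sourcetype IN (cisco:asa, cisco:ftd).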
Hi @Lijesh.Athyalath, Were you able to read the latest reply your post got? If it helped answer your question, please click the "Accept as Solution" button on that reply.  If it does not answer your question, reply to this post to keep the conversation going. 
Hi @Sarath Kumar.Sarepaka, Were you able to check out the latest reply your post got? If it helped answer your question, click the "Accept as Solution" button on the reply that helped.  If it does not answer your question, reply to this post to keep the conversation going. 
Hi @Nivedita.Kumari, Were you able to find a solution or check out the linked Documentation? Or do you still need help?