All Posts

See the Masa diagrams - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 Timestamp extraction is one of the very first steps in event processing. So even if you later decide to drop (send to nullQueue) some events, that will be done way later in the pipeline.
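For context, that kind of nullQueue filter lives in a props.conf/transforms.conf pair. Below is a minimal sketch of the pattern; the sourcetype, transform name, and regex are hypothetical, not from this thread. The TIME_* settings run during parsing, so timestamp extraction has already happened by the time the TRANSFORMS filter fires in the typing pipeline:

```shell
# Hypothetical app layout; stanza names and regex are illustrative only.
mkdir -p /tmp/masa_demo/local
cat <<'EOF' > /tmp/masa_demo/local/props.conf
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Runs in the typing pipeline, after timestamp extraction
TRANSFORMS-drop_noise = drop_debug_events
EOF
cat <<'EOF' > /tmp/masa_demo/local/transforms.conf
[drop_debug_events]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue
EOF
grep -q 'nullQueue' /tmp/masa_demo/local/transforms.conf && echo "filter written"
```

Events matching the regex are routed to nullQueue and never indexed, but they still went through timestamp extraction first, which is why timestamp warnings can appear for events you are dropping.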
Depends on whether, within the TA, the conf is in the local or default directory. If it's in local, it depends on the alphabetical order of apps. Read the document once again, and do a btool --debug to verify. Also, you're not using a deployer to distribute apps to HFs; you're using a deployment server for that. The deployer is for search head clusters.
Yeah, I tried the LINE_BREAKER provided above but didn't seem to have any luck. No matter what I've tried, I haven't been able to get it working as hoped. I think you're right that the layout as-is is just bad, so I'm going to go back to the drawing board and try to change how the logs are formatted before they hit Splunk.
Did you point your servers to a new license manager? (Not cluster master - that's a different functionality even if done on the same server)
Then check your push mode. If you want to push everything as is, you have to set it to "full" for this app.
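For reference, the push mode is set per app in app.conf on the deployer. A minimal sketch, assuming the deployer_push_mode setting available in recent Splunk versions (the app name and path here are hypothetical; verify the setting against your version's app.conf spec):

```shell
# Hypothetical app on the deployer; only the [shclustering] stanza matters here.
mkdir -p /tmp/example_app/local
cat <<'EOF' > /tmp/example_app/local/app.conf
[shclustering]
# "full" pushes the app as-is, merging nothing into default
deployer_push_mode = full
EOF
grep 'deployer_push_mode' /tmp/example_app/local/app.conf
```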
Hello @PickleRick, thanks for your insight. Yes, that's exactly what I am saying: the new nodes can't get a license from the new cluster manager because it's clashing with the old terminated/obsolete AWS instance. How do I resolve this, please?
Are you sure it will work with multiline events? I'm not 100% sure which regex flags are on with SEDCMD
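As an illustration only (SEDCMD is sed-like, but not literally GNU sed, and its multiline flags may differ): plain sed processes input line by line, so `^` and `$` anchor to each line rather than to the whole multiline event. That is exactly the kind of behavior worth testing before trusting a SEDCMD on multiline data:

```shell
# With line-oriented sed, '$' matches the end of EVERY line,
# so both occurrences of 'secret' are replaced.
printf 'line1 secret\nline2 secret\n' | sed 's/secret$/REDACTED/'
```

Splunk applies the expression to the event as a single string, so the anchoring may behave differently; a quick test against a real multiline event is the only reliable check.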
Hi @Geoff.Wild, So I'm doing my best to parse info from existing tickets. Here is some other info I found. Please use the hostname and port field and verify the behavior. (You can edit the collector config and use the hostname/port fields.) (The info below most likely references older versions of agents.)

- Replaced the MySQL JDBC driver in <db-agent>/lib with version 5.1.49 and renamed it to mysql-connector-java-8.0.28.jar
- Restarted the DB Agent, and the data started reporting for the problematic collectors

I hope this helps lead you to try some new things.
Hi Splunkers, I have the following situation and am interested in another opinion. We have a distributed environment with clustered indexers and SHs, and HFs in distributed sites. We are using a deployer to push out confs to the HFs and other assets defined by serverclass. I am trying to set up a configuration where the HFs receive data from a remote host inbound on a specific TCP port.

HF deployment app: local\inputs.conf. In inputs.conf there is a stanza for the expected data being input (Remote Host 1):

[tcp:12345]
index = indexA
sourcetype = sourceType1
disabled = 0

Now there is a TA for this data type, but it has an inputs.conf defined as:

[tcp://22245]
connection_host = dns
index = indexSomethingElse
sourcetype = sourceType
disabled = 0

Which one takes precedence? And if the indexes are different, will this mess up the ingestion and indexing? Am I right in assuming that the inputs.conf defined for the overall inputs takes precedence?

REF: https://docs.splunk.com/Documentation/Splunk/9.1.3/Admin/Wheretofindtheconfigurationfiles
Sorry, I am a bit lost here. How can I run the command if I don't use addcoltotals, please? Without addcoltotals labelfield="Total Delivered", the field Total Delivered will not exist to do a count by. And if I add addcoltotals labelfield="Total Delivered" to your suggestion, it defeats the purpose, unless I am being thick, which I very well could be!
Thanks Guys. it is working !!!!
Well, I tried the solution above and replaced mysql-connector-java-8.0.27.jar with an older version, mysql-connector-java-5.1.49.jar, which I got from https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.46.tar.gz

This broke the other MySQL DB collectors and didn't fix the one I was trying to, so I reverted my change. The error message for them was:

java.sql.SQLException: No suitable driver found for jdbc:mysql://myhost.mydomain:3306/
Hi @Amit.Bisht, I searched for your error and found a few existing posts that mention it. Please take a look and see if you can find a solution or a lead in them. https://community.appdynamics.com/t5/forums/searchpage/tab/message?filter=location&q=%22Result:%20401%20Unauthorized%20-%20content:%22&noSynonym=false&inactive=false&advanced=true&location=category:Discussions&sort_by=topicPostDate&collapse_discussion=true&search_type=thread
This worked like a charm - thank you!
You won't find events without a timestamp because Splunk always stores every event with a timestamp. If the event does not come with a timestamp, or if the timestamp is invalid, then Splunk will use the timestamp from the previous event. The timestamp warning cited does not apply to the same sourcetype as the nullQueue transform: the warning is for wlc_syslog and the transform is for wlc_syslog_rt0.
Hi @richgalloway, @ITWhisperer, I have a similar question, but it's a bit more involved.

Use case: In my org, Mission Control events are primarily investigated by the SOC as soon as they pop up. If further investigation is needed, the incident is escalated to the Enterprise Security team, which is responsible for performing a deeper/detailed investigation and updating back in Mission Control.

The Enterprise Security manager wants a dashboard that will inform him, when an investigation is being performed by his team (ES), how much time on average a team member takes to resolve an incident (for now I'm only focusing on this), averaged over a month.

Jeff is an ES resource and Stephen is a SOC resource. I want to pick end_time where the resource is Stephen and the notes say "Escalation to ES", and start_time where the resource is Jeff, and subtract them in order to get claim_time_by_ES.

The query I'm using so far, not successful yet:

| mcincidents unwind_to=task
| search incident_id="3e864839-xyzab"
| eval is_es_team=if(IN(owner, "Jeff", "Rama", "Mel"), 1, 0)
| eval is_soc_team=if(IN(owner, "Stephen", "Crossman", "Ruby", "Cole"), 1, 0)
| eval end_time_for_soc=if(is_soc_team==1 AND name=="Escalation to ES", end_time, null())
| eval start_time_for_ES=if(is_es_team==1, start_time, null())
| eval total_time_claimed=end_time_for_soc - start_time_for_ES

In the snapshot of the log below, the columns are, in order: owner > start_time > end_time > total_time_taken > notes
I assumed (rather embarrassingly!) this restarted the deployment server splunkd! This is very useful. 
Thank you for sharing that link @burwell! It was hugely helpful. What finally ended up working was the following; the additional where line was key. Thank you for helping me work through this!

| eval _time = strptime(Opened_At,"%Y-%m-%d %H:%M:%S")
| sort -_time
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
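The reason the where clause works: strptime converts the text timestamp to epoch seconds, so the time-range check is a plain numeric comparison. A rough shell analogy (GNU date assumed; the timestamps are made up for illustration):

```shell
# Parse two text timestamps to epoch seconds, then compare numerically,
# the same way the SPL where clause compares _time to info_min_time.
opened=$(date -d '2024-01-15 08:30:00' +%s)
window_start=$(date -d '2024-01-15 00:00:00' +%s)
if [ "$opened" -ge "$window_start" ]; then
    echo "kept by where clause"
fi
```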
#!/bin/bash
########################## FUNC
function UFYUM(){
    # Download and install the latest x86_64 RPM scraped from the Splunk UF download page
    cd /tmp
    rpm -Uvh --nodeps "$(curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*x86_64.rpm"' | sed 's/\"//g' | head -n 1)"
    yum -y install splunkforwarder.x86_64
    sleep 5
}

function UFDEB(){
    # Download and install the latest amd64 DEB scraped from the Splunk UF download page
    cd /tmp
    wget "$(curl -s https://www.splunk.com/en_us/download/universal-forwarder.html\?locale\=en_us | grep -oP '"https:.*(?<=download).*amd64.deb"' | sed 's/\"//g' | head -n 1)" -O amd64.deb
    dpkg -i amd64.deb
    sleep 5
}

function UFConf(){
    # Drop a deployment-client app and an admin seed, then start the forwarder
    mkdir -p /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/
    cat <<EOF > /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/app.conf
[install]
state = enabled

[package]
check_for_updates = false

[ui]
is_visible = false
is_manageable = false
EOF
    cat <<EOF > /opt/splunkforwarder/etc/apps/nwl_all_deploymentclient/local/deploymentclient.conf
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = XXXXXXXXXXXXXXXXXXXXXXX:8089
EOF
    cat <<EOF > /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = XXXXXXXXXXXXXXXXXXXXXXXX
EOF
    /opt/splunkforwarder/bin/splunk cmd btool deploymentclient list --debug
    /opt/splunkforwarder/bin/splunk start --accept-license
}

######################################################### MAIN
# Install with whichever package manager is present (RPM first, then DEB),
# but never both - the original ran both checks independently.
if command -v yum > /dev/null; then
    UFYUM
    UFConf
elif command -v dpkg > /dev/null; then
    UFDEB
    UFConf
else
    echo "No supported package manager (yum or dpkg) found."
fi