All Posts

Hey all, I am taking input over TCP by having this in my inputs.conf:

[tcp://1.2.3.4:123]
connection_host = ip
index = index1
sourcetype = access_combined

My question is: can I have the same port send data to multiple indexes? I.e., without opening additional ports on my firewall, can I have another host send data to the same port but land in a different index? I tried adding this:

[tcp://5.6.7.8:123]
connection_host = ip
index = index2
sourcetype = access_combined

but that just stopped the ingestion altogether. Thanks.
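One possible approach (a sketch only, not a verified fix for the stalled ingestion; the stanza and transform names below are assumptions based on the question) is to keep a single TCP input that accepts all hosts, and route by sending host with props/transforms at parse time:

```ini
# inputs.conf — one port, no per-host stanza, so any host may connect
[tcp://123]
connection_host = ip
index = index1
sourcetype = access_combined

# props.conf — on the component that parses the data (indexer or heavy forwarder)
# Override the destination index for events arriving from 5.6.7.8.
[host::5.6.7.8]
TRANSFORMS-route_index2 = route_to_index2

# transforms.conf
[route_to_index2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2
```

With this layout, index1 acts as the default from the input stanza and the transform rewrites the index only for the matching host.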
Hi @sintjm, I’m a Community Moderator in the Splunk Community. This question was posted 8 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
To clarify the existing condition: there is a list of hostnames and IPs with different owners, and some have a null owner. By default, the hostname dropdown only shows hostnames that have an owner value, and hides hostnames that do not have an owner. How can this be refined? The following is the related capture, and this is the search output:
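One common way to make rows with a missing owner selectable (the lookup name and field names below are assumptions, since the actual search is only visible in the screenshots) is to fill the null owner value before the dropdown's populating search groups the results:

```spl
| inputlookup host_owner.csv
| fillnull value="unassigned" owner
| stats count by hostname, owner
| fields hostname
```

With `fillnull`, hostnames whose owner is null get a placeholder value instead of being dropped by the `by owner` grouping, so they appear in the dropdown alongside the owned hosts.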
Hi Team, Our Splunk environment, including Search Heads, Indexers, and CM, is hosted in the cloud and managed by Splunk Support. We manage our Deployment Master and Heavy Forwarder servers, which are hosted in Azure. We are ingesting logs from both Windows and Linux servers via Splunk Universal Forwarder. For some time, we have been ingesting IIS logs from all Windows machines, defining the sourcetype based on the application and environment. For instance, logs from an application server named "xyz" have a sourcetype of "xyz:iis:prod." However, our internal SOC team has identified that data parsing for these IIS logs is not occurring, and it needs to be addressed immediately without changing the host or sourcetype information. Currently, when the sourcetype is set to "iis," fields are auto-extracted, but when a different sourcetype is used, field extraction does not happen. I need to ensure that field extraction for Microsoft IIS logs works correctly while keeping the sourcetype unchanged. How can this be achieved?
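For standard W3C-format IIS logs, Splunk's built-in iis sourcetype relies on structured (indexed) extractions that are applied at input time on the forwarder. A minimal sketch of one possible fix (assuming the logs are standard W3C format with a #Fields header line) is to give the custom sourcetype the same extraction setting and deploy it to the Universal Forwarders reading the files:

```ini
# props.conf — deploy to the Universal Forwarders monitoring the IIS logs
# (indexed extractions for W3C logs must be applied at input time, not on
# the indexer, so this stanza belongs on the UF)
[xyz:iis:prod]
INDEXED_EXTRACTIONS = w3c
```

Note that already-indexed events are not re-parsed; only data ingested after the change would get the field extractions, and the sourcetype value itself stays untouched.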
Hey, a few years late, but I'm just wondering: did you change the timestamp into epoch time before using the transaction command?
You didn't say to drop the "g" at the end. Of course your suggestion helped, but not fully.
I am also facing this issue. I can see my Splunk home directory is /opt/splunkforwarder. I tried to change it via splunk-launch.conf but it is not working. How do I change the home directory to /opt/splunk? @isoutamo
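For reference, SPLUNK_HOME is tied to where the software is actually installed, so editing splunk-launch.conf alone is not enough: the usual approach (a sketch, assuming a Linux Universal Forwarder and that downtime is acceptable) is to stop the forwarder, move the whole installation directory to the new path, and then make splunk-launch.conf match:

```ini
# $SPLUNK_HOME/etc/splunk-launch.conf
# (after stopping the forwarder and moving /opt/splunkforwarder to /opt/splunk)
SPLUNK_HOME=/opt/splunk
```

Be aware that /opt/splunk is conventionally the path for full Splunk Enterprise while /opt/splunkforwarder is the Universal Forwarder default, so reusing the Enterprise path for a forwarder can cause confusion later.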
This worked. Finally, I did this and solved the 502 error (an internal server error shown after search) with AWS ALB:
- Set the same value (60 seconds) for busyKeepAliveIdleTimeout and the Connection idle timeout of the ALB.
- Disabled HTTP/2 on the ALB.
This worked on SHCs running both Splunk 7.3.3 and 8.2.8.

I also found and verified that this can solve the ALB 502 error:
- NLB -> ALB as the Target Group.
- Default values for busyKeepAliveIdleTimeout and the Connection idle timeout of the ALB; no need to change any timeout settings.
Ref. https://repost.aws/ja/knowledge-center/alb-static-ip
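For anyone following along, the first timeout mentioned above lives in web.conf on each search head; a minimal sketch (assuming a 60-second ALB idle timeout, as in the post) looks like this:

```ini
# web.conf on each search head behind the ALB
[settings]
# Match the ALB "Connection idle timeout" (60 seconds here)
busyKeepAliveIdleTimeout = 60
```

A restart of splunkweb is needed for the change to take effect.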
Please share the PROBLEM and RECOVERY events. (It is rather difficult to solve your problem without being able to see what events you are dealing with!)
Error while connecting AWS lambda with SignalFX
Just wanted to add the "final touch" which made the solution work as intended: Solved: Re: Defining a global token for alert recipients - Splunk Community
Sweet relief after so much trial and error, I could kiss you! Yes, this solution finally works!

savedsearches.conf:
<basesearch> | table <something> | `macro`

macros.conf:
[macro]
definition = eval _recipients="email1@email.com, email2@email.com"

and finally in savedsearches.conf (or the To: field in the UI):
action.email.to = $result._recipients$

And it finally works as intended! Wish I could award 100 karma for this. I still think "email groups" should be a built-in thing available both in the GUI and config files, but I'm too happy to care right now.
Hi, everybody! I am an iOS engineer. We have recently started using AppD, but there are some things that I am very confused about, so I am posting this feedback in the hope that someone can help answer it. See the red frame in the upper right corner of the picture above. https://docs.appdynamics.com/appd/4.5.x/en/end-user-monitoring/mobile-real-user-monitoring/overview-of-the-controller-ui-for-mobile-rum/mobile-sessions#MobileSessions-SessionTimeline My questions are as follows: 1. What does "49 of 54 Sessions for this Agent" mean here? 2. When I click the arrows before and after the red-framed text, the page switches to show different logs; how are the contents of the current page and the next page divided? What does the log of the current page represent? 3. How is the lifecycle of a session calculated? I don't see the code for the relevant session in the codebase. 4. What does a session mean? How is it divided? Hope someone can answer; many thanks. Best regards.
This is indeed a nice alternative thank you!
UF host for the last 60 minutes with no errors and warnings. IDX side: still a problem here. This morning we had to reboot the Splunk servers due to an operating system security patch; you can see it at the beginning of the graph. This meant the connection between UF and IDX had to be re-established. In other words, when the IDX or UF restarts, the data was delayed about 20 minutes yesterday and about 10 minutes today, so this is not ordinary delay or batch processing.
Hi @Srini_551, as @marnall said, Splunk isn't a tool for updating data because it doesn't use database tables, but you could use one of these workarounds to solve your needs: 1) schedule a search that updates your lookup with the new alerts, and access the lookup using the Splunk Lookup Editor App. 2) create a dashboard in which you have two panels: one with all the alerts, so you can choose the alert to modify; then, in the second panel, you display the selected row and, using a text input, you can update the row; at the end you can save the row back to the lookup. This solution works only if you are using a KV Store that records a key for each row. The first solution is easier to implement, but you must use the Splunk Lookup Editor App as the interface. Ciao. Giuseppe
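The first workaround above could be sketched as a scheduled search like the following (the index, field names, schedule, and lookup file are assumptions for illustration):

```spl
index=alerts earliest=-15m
| table _time, alert_name, status
| inputlookup append=true alert_tracking.csv
| dedup alert_name
| outputlookup alert_tracking.csv
```

Run it every 15 minutes; because the fresh search results come before the appended lookup rows, `dedup alert_name` keeps the newest status for each alert when the lookup is rewritten.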
Tried this and it worked, thanks!
Any errors on either side of the connection?