All Posts


Thanks @danielcj, I think this is exactly what I needed. Somehow I never came across this when googling and searching around here.
@ITWhisperer @gcusello In the Hadoop ResourceManager, after an "Operation=Submit Application Request" event, the ResourceManager will "Allocate New ApplicationID". I would like to see the time difference between the two subsearches in the Splunk query.
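One way to compute the gap without subsearches is to correlate both events by a shared field and take the difference, along these lines (a sketch only; the index, sourcetype, and the correlating field `application_id` are assumptions about your data):

```
index=hadoop sourcetype=resourcemanager_audit
    ("Operation=Submit Application Request" OR "Allocate New ApplicationID")
| stats min(_time) as submit_time max(_time) as allocate_time by application_id
| eval diff_sec = allocate_time - submit_time
| table application_id, submit_time, allocate_time, diff_sec
```

If the two event types do not share a correlating field, the `transaction` command with `startswith`/`endswith` is another option, at a higher resource cost.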
Hello @jamesbanday, The message "is not a known identity" relates to the Identities configured in Splunk Enterprise Security; this user is probably not present in the Identities lookup. To configure Assets & Identities in Splunk Enterprise Security, see the following doc: https://docs.splunk.com/Documentation/ES/7.3.0/Admin/Addassetandidentitydata. You can also check in Splunk whether this identity exists using the identities macro: | `identities` | search identity=<NAME_IDENTITY> Thanks.
On that same link, they have given a good explanation of the search. May I know if you have read it, and what confusion you have after reading it? Thanks.
Hello @MattH665, I believe you are looking for the setting hostname = <your_hostname> in alert_actions.conf: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Alertactionsconf#GLOBAL_SETTINGS. Examples: http://splunkserver:8000, https://splunkserver.example.com:443. Remember to restart your instance after the changes. Thanks.
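A minimal sketch of what that might look like (the hostname value here is a placeholder; substitute the externally reachable name and port of your load balancer):

```
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
# Global setting: overrides the host/port Splunk uses when it builds
# links back to Splunk Web in alert actions such as email.
hostname = https://splunkserver.example.com:443
```

This only changes the URL written into alerts; it does not affect the port Splunk Web actually listens on.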
Hello @WumboJumbo675,
1 - Confirm in Splunk Cloud that the internal logs from the Heavy Forwarder are being indexed (I believe they are, since you said some logs are correct). If yes, the issue is in the UF > HF communication: index=_internal host=<host_name_heavy_forwarder>
2 - Confirm that the communication between UFs and HFs is working correctly. Look for ERROR or tcpout error messages on the UFs in $SPLUNK_HOME/var/log/splunk/splunkd.log
3 - Run a btool check to confirm there are no syntax errors in the .conf files on the UFs: splunk btool check
4 - Check the precedence of the inputs.conf files using btool to confirm that the inputs are being read: splunk btool inputs list --debug
5 - Confirm that a "wineventlog" index exists in Splunk Cloud.
Let me know if this helps. Thanks.
Hello @phanikumarcs, The spath command is duplicating the values of this event. Please try the following without the spath command: index=myindex sourcetype=mysourcetype | table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages Thanks.
Hi, We have Splunk running behind a load balancer, so we reach it on the standard port 443. On the backend it uses a different port, which the LB connects to, so that port needs to stay set as the Web port. The problem is that when we get alerts, Splunk still puts the Web port in the URL, so the URL doesn't work and we have to manually edit it to remove the port. Is there no separate setting, so that the actual listening port and the port Splunk puts in the URL can be controlled independently?
You do not replace data in Splunk - if you ingest it into an index it remains there until it expires. It is time-based storage, so every piece of data gets a timestamp that reflects the event creation in some way. So, every day when you ingest those 200 rows they will, if set up to do so, have a date stamp of the day they are ingested. If you only ever search a single day's data you will get the latest data. If you set a very short retention period on the index, the data will age out and disappear after that time. The alternative is to make those rows a lookup, in which case the data IS replaced, as you can overwrite the lookup; however, creating a lookup and ingesting into an index are not the same process. What is your use case for this data?
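If the lookup route fits, a scheduled search could rewrite the lookup file each day so the table is fully replaced rather than appended. A sketch, assuming the DB rows land in a hypothetical index `db_customers` with the field names shown:

```
index=db_customers sourcetype=mysql_customer earliest=-24h
| table customer_id, customer_name, region
| outputlookup customer_table.csv
```

Later searches would then read the current snapshot with | inputlookup customer_table.csv — outputlookup overwrites the file by default, which gives the "replace, don't append" behavior.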
I can confirm this is still an issue. Version: 9.0.2303.202, Build: 06d6be78fc0e. Setting the base search to use the global time selector's token, and verifying the chain searches use the same token, is not sufficient to get the time selector to update the panels; they just stay frozen when the time selector changes. Dashboards cannot be optimized properly if we cannot use base searches. I cannot take over the world with this bug in place.
Is there a way to add an interval setting to define the polling for a flat file? I'm not sure why it was requested, but I was asked if it was possible and thought for sure it was, only to find that it is currently not an option according to the inputs.conf section in the Admin Manual: https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf. I read that the default polling may be 1 ms, to collect the modified file in near real time. I offered an alternative: create an sh or PowerShell script (Get-Content or some other scripting language) and set an interval to read the flat file at the desired time. However, I would have to duplicate within the script all of the options available in a file monitor stanza, such as crcSalt, whitelist, and blacklist, and the script would have to be code reviewed and go through a lengthy pipeline. Any help would be appreciated to say whether this is a definite no-go or a possible enhancement request for the next Splunk version. Thank you.
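For reference, the scripted-input workaround described above might look like this in inputs.conf (the app path, script name, and values are hypothetical; only script:// stanzas accept interval, monitor:// stanzas do not):

```
# $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf
# Hypothetical scripted input that reads the flat file on a schedule
# instead of the near-real-time tailing a monitor:// stanza does.
[script://./bin/read_flat_file.sh]
interval = 300          # run every 5 minutes
sourcetype = flat_file
index = main
disabled = false
```

The trade-off is as stated in the post: the script itself must reimplement any dedup/filtering behavior (crcSalt, whitelist, blacklist) that the native file monitor would otherwise provide.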
Hi, I want to extract the fields destination, messages, and inflightMessages from the JSON below. This is one of the latest events:
{ "analytics": [ { "destination": "billing.events.prod", "messages": 0, "inflightMessages": 0 }, { "destination": "billing.events.dev", "messages": 0, "inflightMessages": 0 }, { "destination": "hub.values.prod", "messages": 0, "inflightMessages": 0 }, { "destination": "hub.fifo-prod", "messages": 0, "inflightMessages": 0 } ] }
This is the SPL I am using: index=myindex sourcetype=mysourcetype | spath input=_raw | table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages
In the interesting fields, when I hover the cursor over "analytics{}.destination" to see the values and associated counts, each value shows a count of 2, even when searching for a single event. Why is this happening, and what is the issue? This data generally comes from MuleSoft MQ.
Hello - admitted new guy here. I have a heavy forwarder sending data from a MySQL database table into Splunk once a day. Works great. But now I want to send the data from a 'customer' type table with about 200 rows, and I would like to replace the data every day rather than append 200 new rows to the index every day. How is this best accomplished? I tried searching, but I may not even be using the correct terminology.
This is great! Good steps to follow, thank you!
Thanks for responding. I'll proceed and see how it goes!
@divyabarsode We go to MC > Risk Analysis > Ad-hoc score > choose the object > reduce manually. PFA a screenshot of it.
The correct workaround should have been:

[tcpout]
negotiateProtocolLevel = 5

negotiateProtocolLevel = 0 is no longer valid with 9.1.x (see enableOldS2SProtocol in the 9.1.x outputs.conf) and is likely to cause issues.
Hi, If I understood correctly, you have one forwarder which is sending those events to different indexers. As those are configured to send one by one based on your input, and one target is down, it cannot send to that target. Based on your outputs.conf, it is quite probable that it is just waiting for that target (your dev, which was being patched) to become available, and will then continue with the next. This is a common issue when you are replicating outputs, e.g. to Splunk and a syslog server. r. Ismo
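For context, a forwarder clones data when outputs.conf lists multiple target groups, and a single unreachable group can then cause events for that group to queue. A sketch of such a configuration (all group and host names here are hypothetical):

```
# outputs.conf on the forwarder (hypothetical names)
[tcpout]
defaultGroup = prod_indexers, dev_indexers   # data is cloned to BOTH groups

[tcpout:prod_indexers]
server = prod-idx1:9997, prod-idx2:9997      # load-balanced within the group

[tcpout:dev_indexers]
server = dev-idx1:9997                       # if this host is down, events
                                             # destined for this group queue
                                             # until it comes back
```

Within one group, listing several servers gives load balancing with failover; it is the cloning across separate groups where one dead target can block that group's copy of the data.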
Hi, here is an old post about migrating a distributed Splunk environment. As long as you use Linux, there shouldn't be issues with different OS distros during the migration. Just keep the Splunk version the same on the old and new nodes until you have completed the migration. https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 r. Ismo
Hi, an old answer, https://community.splunk.com/t5/Splunk-Search/How-to-find-which-indexes-are-used/m-p/674463, which answers your questions too. r. Ismo