All Posts



Hello @WumboJumbo675 ,

1 - Confirm in Splunk Cloud that the internal logs from the Heavy Forwarder are being indexed (I believe yes, since you said some logs are correct). If yes, the issue is in the UF > HF communication.

index=_internal host=<host_name_heavy_forwarder>

2 - Confirm that the communication between the UFs and HFs is working correctly. Look for ERROR messages or tcpout error messages on the UFs in:

$SPLUNK_HOME/var/log/splunk/splunkd.log

3 - Run a btool check to confirm there are no syntax errors in the .conf files on the UFs:

splunk btool check

4 - Check the precedence of the inputs.conf files using btool to confirm that the inputs are being read:

splunk btool inputs list --debug

5 - Confirm that a "wineventlog" index exists in Splunk Cloud.

Let me know if this helps. Thanks.
Hello @phanikumarcs ,

The spath command is duplicating the values of this event. Please try the following, without the spath command:

index=myindex sourcetype=mysourcetype | table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages

Thanks.
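A sketch of another pattern that sometimes helps when an array's values show doubled counts (only the index and sourcetype names come from the thread; the rest is an assumption that the JSON is being extracted twice, once automatically and once by spath): extract the array once, expand it, then extract the fields per row.

```
index=myindex sourcetype=mysourcetype
| spath input=_raw path=analytics{} output=analytics
| mvexpand analytics
| spath input=analytics
| table destination, messages, inflightMessages
```

With this shape, each array element becomes its own result row, so the field values should no longer appear multivalued or double-counted.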
Hi, We have Splunk running behind a load balancer, so we reach it on the standard port 443. On the backend it uses a different port, which the LB connects to, so that port needs to stay set as the Web port. The problem is that when we get alerts, Splunk still puts the port from the Web port setting in the URL, so the URL doesn't work and we have to manually edit it to remove the port. Is there no separate setting for this, so that the actual listening port and the port it puts in the URL can be controlled separately?
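Not confirmed in this thread, but one setting that may be worth checking against your version's documentation: alert_actions.conf has a `hostname` setting that controls the host (and port) Splunk writes into alert links, independently of the Web port. A sketch, with a placeholder URL:

```
# alert_actions.conf (sketch; the URL below is a placeholder)
[email]
hostname = https://splunk.example.com
```

If the port is omitted from the value, the links in alert emails should come out without one, which is what a standard-port load balancer setup needs.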
You do not replace data in Splunk - if you ingest it into an index, it remains there until it expires. It is time-based storage, so every piece of data gets a timestamp that reflects the event creation in some way. So every day when you ingest those 200 rows, they will, if set up to do so, have a date stamp of the day they are ingested. If you only ever search a single day's data, you will get the latest data. If you set a very short retention period on the index, the data will age out and disappear after that time.

The alternative is to make those rows a lookup, in which case the data IS replaced, as you can overwrite the lookup. However, creating a lookup and ingesting to an index are not the same process.

What is your use case for this data?
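If the lookup route fits the 200-row 'customer' table case, a sketch of a scheduled search that overwrites the lookup on each run (the index, sourcetype, and field names are placeholders, not from the thread):

```
index=mydb_index sourcetype=customer_table earliest=-24h
| table customer_id, customer_name, customer_status
| outputlookup customers.csv
```

outputlookup replaces the lookup file's contents by default, so scheduling this daily keeps exactly one current copy of the table, which searches can then use via the lookup command.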
I can confirm this is still an issue. Version: 9.0.2303.202 Build: 06d6be78fc0e

Setting the base search to use the global time selector's token, and verifying the chain searches are using the same token, is not sufficient to get the time selector to update the panels. They just stay frozen when changing the time selector. Dashboards cannot be optimized properly if we cannot use base searches. I cannot take over the world with this bug in place.
Is there a way to add an interval setting to define the polling for a flat file? Not sure why it was requested, but I was asked if it was possible and thought for sure it was, only to find that it is currently not an option according to the inputs.conf section in the Admin Manual: https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf

I read that the default polling may be 1ms, to collect the modified file in near real time. I offered an alternative: create an sh or ps script to Get-Content (or some other scripting language) and then set an interval to read the flat file at the desired time. However, I would have to duplicate all of the options available for a file monitor stanza, such as crcSalt and whitelist/blacklist, within the script, which would have to be code reviewed and go through a lengthy pipeline.

Any help would be appreciated to say if this is a definite no-go, or if it is a possible enhancement request to Splunk for the next version. Thank you.
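For reference, the scripted-input alternative described above could be sketched like this (the path and script name are hypothetical; as noted, monitor-stanza features such as crcSalt and whitelist/blacklist would have to be re-implemented in the script itself):

```shell
#!/bin/sh
# Hypothetical scripted-input wrapper for a flat file (sketch).
# Splunk runs a scripted input on the stanza's `interval` and indexes
# whatever the script writes to stdout.

emit_flatfile() {
    # Print the flat file's full contents to stdout for Splunk to index.
    cat "$1"
}

# When invoked by Splunk, the file path could be hard-coded or passed in.
if [ $# -gt 0 ]; then
    emit_flatfile "$1"
fi
```

This would be paired with an inputs.conf stanza such as `[script://./bin/read_flatfile.sh]` with `interval = 300`, so Splunk would re-read the file every 300 seconds rather than continuously.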
Hi, I want to extract the fields destination, messages, and inflightMessages from the JSON below. This is one of the latest events:

{
  "analytics": [
    { "destination": "billing.events.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "billing.events.dev", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.values.prod", "messages": 0, "inflightMessages": 0 },
    { "destination": "hub.fifo-prod", "messages": 0, "inflightMessages": 0 }
  ]
}

This is the SPL I am using:

index=myindex sourcetype=mysourcetype | spath input=_raw | table analytics{}.destination, analytics{}.messages, analytics{}.inflightMessages

In the interesting fields, under "analytics{}.destination", when I move the cursor to see the values and the counts associated with them, each value shows a count of 2, even when searching for a single event.

Why is this happening - what is the issue? This data generally comes from MuleSoft MQ.
Hello - admitted new guy here,

I have a heavy forwarder sending data from a MySQL database table into Splunk once a day. Works great. But now I want to send the data from a 'customer' type table with about 200 rows, and I would like to replace the data every day, rather than append 200 new rows to the index every day. How is this best accomplished? Tried searching, but I may not even be using the correct terminology.
This is great! Good steps to follow, thank you!
Thanks for responding. I'll proceed and see how it goes!
@divyabarsode We go to MC > Risk Analysis > ad-hoc score > choose the object > reduce manually. PFA: a screenshot of it.
The correct workaround should have been:

[tcpout]
negotiateProtocolLevel = 5

negotiateProtocolLevel = 0 is no longer valid with 9.1.x (see enableOldS2SProtocol in the 9.1.x outputs.conf) and is likely to cause issues.
Hi

If I understood correctly, you have one forwarder which is sending those events to different indexers. As those are configured to send one by one based on your input, and one target is down, it cannot send to that one. Based on your outputs.conf, it's quite probable that it is just waiting for this target (your dev, which is being patched) to become available again, and then it continues with the next. This is a normal issue when you are replicating outputs, e.g. to Splunk and a syslog server.

r. Ismo
Hi

Here is an old post about migrating a distributed Splunk environment. As long as you use Linux, there shouldn't be issues with different OS distros during the migration. Just keep the Splunk version the same on the old and new nodes until you have done the migration.

https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062

r. Ismo
Hi

An old answer, https://community.splunk.com/t5/Splunk-Search/How-to-find-which-indexes-are-used/m-p/674463 , which answers your questions too.

r. Ismo
Hi, I've got a problem with this playbook code block: the custom functions I try to execute seem to hang indefinitely. I also know the custom function works, because I've successfully used it from a utility block.

I've tried a few different arrangements of this logic, including initializing cfid with both of the custom function calls, consolidating the custom function names into a single while loop with phantom.completed, and using pass instead of sleep. But the custom function doesn't seem to return/complete.

Here's another example, which is basically the same except it consolidates the while loops and executes both custom functions at the same time.

Once either of these scenarios (or something similar) is successful, I need to get the results from the custom function executions (below pic), combine them into a single string, and then send "data" to another function:

> post_http_data(container=container, body=json.dumps({"text": data}))

Any assistance would be great. Thanks.
Hi

You can always ask Splunk to split your Enterprise license into 5GB and 45GB license files. Then also ask for a 5GB ES license. Then just use a separate LM where you put those two 5+5 files, and use that for your SIEM instance. This will fulfill the official requirements.

r. Ismo
Thanks @ITWhisperer  its working for me
Hi

Rule of thumb: never restore anything into a running system unless your product supports it!

If you have a single instance where you took that backup, then you should use a separate dummy/empty instance to restore it to. I suppose that even in that case you will have some issues with files, e.g. hot buckets and buckets which have switched state from warm to cold, or cold to frozen, during your backup window. If you used e.g. a snapshot for the backup, then this is not so big an issue. After restoration, just switch this service up (change the Splunk node name, or shut down the primary instance first).

If you have a clustered environment, then it's much harder to get a working backup and restore it. I really suggest that you use snapshots for backing up! You must take these at the same time from all your indexers to get a consistent backup. I really like an empty test etc. environment for restoration!

r. Ismo
Have UFs configured on several Domain Controllers that point to a Heavy Forwarder, which points to Splunk Cloud. Trying to configure Windows Event Logs. Application, System & DNS logs are working correctly; however, no Security logs for any of the DCs are coming in. The Splunk service is running with a service account that has proper admin permissions. I have edited the DC GPO to allow the service account access to 'Manage auditing and security logs'. I am at a loss here. Not sure what else to troubleshoot.

Here is the inputs.conf file on each DC:

[WinEventLog://Application]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog

[WinEventLog://DNS Server]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = wineventlog
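One possible first check, sketched with a placeholder host name: search the UFs' internal logs from Splunk Cloud for errors mentioning the Security channel, since access-denied and channel-subscription failures are typically logged in splunkd.log.

```
index=_internal host=<dc_hostname> source=*splunkd.log* WinEventLog (ERROR OR WARN)
```

If the Security stanza is being read but the collection fails, an error should show up here for that channel specifically.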