All Posts


Hi Splunk SMEs, Good day. We are facing an issue after a recent deployment in Splunk: we can no longer connect to the DB Connect Task Server on our Splunk HF. It was working fine initially; we upgraded the Java runtime from Corretto to Zulu last month and it seemed to keep working fine. After a subsequent deployment the issue now appears. Can anyone assist me in solving this? Thanks, Mel
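For anyone looking into this, a minimal sketch of a search over the internal logs to narrow it down (the source wildcard is an assumption based on the default splunk_app_db_connect log file names; adjust for your DB Connect version):

index=_internal source=*splunk_app_db_connect* (ERROR OR FATAL)
| stats count BY source

If nothing comes back, the raw DB Connect log files on the HF itself would be the fallback place to look.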
Thanks for the quick response! It's working as expected.
Thanks @richgalloway for your inputs. Does the volume of data being sent to Splunk help in determining which method to use between HEC and UF? For our use case we plan to send events with associated information (a JSON of ~400 bytes each), and we may not be sending more than 5000 such events/day. You also mentioned the client getting acks for events sent via HEC, and we do plan to have that. Based on the volume and our use case, do you suggest we go with HEC? Also, while building an add-on, is it possible to add a query which will identify specific events as alerts and ship it with the add-on, so the customer can install it in their Splunk setup?
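On that last point, a minimal sketch of how an alert search is commonly shipped inside an add-on, as a savedsearches.conf in the add-on's default directory (the stanza name, search, and schedule below are placeholders, not anything from this thread):

# default/savedsearches.conf
[My Add-on - Suspicious Event Alert]
search = index=* sourcetype=my:custom:events severity=high
cron_schedule = */15 * * * *
enableSched = 1
dispatch.earliest_time = -15m@m
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
alert.track = 1
# ship it disabled so the customer can review and enable it
disabled = 1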
Thank you. Yes, I was wrong about transforms.conf. What I actually want is to override the sourcetype of elastic:auditbeat:log based on the event content, as this link specifies.
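For reference, the usual shape of that kind of index-time sourcetype override (a minimal sketch; the regex and the new sourcetype name are placeholders, and the stanzas need to live on the parsing tier, i.e. the HF or indexers):

# props.conf
[elastic:auditbeat:log]
TRANSFORMS-set_sourcetype = auditbeat_sourcetype_override

# transforms.conf
[auditbeat_sourcetype_override]
# REGEX is a placeholder: match whatever identifies the events to re-type
REGEX = <pattern that identifies the events>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::elastic:auditbeat:custom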
index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main" | rex \"status\":"(?<Http_code>\d+)" | rex \"evtType\":"\"(?<evt_type>\w+)"\... See more...
index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main" | rex \"status\":"(?<Http_code>\d+)" | rex \"evtType\":"\"(?<evt_type>\w+)"\" |search evt_type=REQUEST| stats count(eval(Http_code>0)) as "Totalhits" count(eval(Http_code <500)) as "sR"| append [ search index="ss-stg-dkp" cluster_name="*" AND namespace=dcx AND (label_app="composite-*" ) sourcetype="kube:container:main"| rex field=_raw "Status code:"\s(?<code>\d+) |stats count(eval(code =500)) as error]   Hi All I want to add error count in to Totalhits like eval TotalRequest = error+TotalHits It is showing as null value. Please help me to achieve this
What we did was:
- Restored 2 old peer nodes from a backup.
- Cloned the master node to set up a shadow cluster and adapted the replication factor on this clone to 2. This allowed us to make a mini-cluster which is fully balanced (so both restored peer nodes would have all data). I did, however, notice that on one of the two recovered nodes the colddb location remained empty.
- Placed the shadow cluster in maintenance mode and removed one of the peer nodes.
- Reconfigured this peer to connect to the production cluster. Also changed the name in server.conf and removed instance.cfg to prevent duplicate peer names and UUIDs.

When I check the "Settings / Indexer Clustering" page on the master it does show the recovered node. The "Indexes" tab on this same page shows all indexes are green. But when I do a search for the earliestTime, the older data which is on the recovered peer is not seen. Only when I add the recovered peer to distsearch.conf does it see the older events. Also, when I remove the recovered peer from the cluster again, the older events are gone again, which indicates those cold buckets were not synced to the production nodes.

The buckets have not rolled to frozen, because frozenTimePeriodInSecs for the index is set to 157248000 (about 5 years) and the data I am trying to recover is from 2020. I also just ran a dbinspect and it does not seem to report any errors on the cold buckets on the restored host: the path is the colddb path and the state is 'cold', as expected.

Eventually I would like to remove the recovered peer from the cluster again, since it is still running RHEL7 and has to be switched off. So I am looking for a way to safely get the data onto the RHEL9 nodes. And as a side track I want to understand how the warm/cold buckets are handled, because if they are indeed not replicated, it also explains why they were lost in the first place: the RHEL9 nodes were clean installations which replaced the RHEL7 nodes. The rough procedure followed in this migration was:
- Add an additional "overflow" peer to the cluster and make sure the cluster is synced.
- Bring down (offline --enforce-counts) one of the RHEL7 nodes and replace it with a clean RHEL9 node. The config from /opt/splunk/etc was taken over from the old RHEL7 node.
- When all nodes were replaced, the "overflow" node was removed.

So, if cold buckets are not replicated, they were never replicated to the overflow node and eventually were all gone.
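For reference, a minimal sketch of a check that might help here (the index name is a placeholder; 3 matches the replication factor mentioned in the original question): run dbinspect from the production search head and count how many peers hold each cold bucket, to see which buckets only exist on the recovered RHEL7 peer.

| dbinspect index=<your_index>
| search state=cold
| stats dc(splunk_server) AS copies values(splunk_server) AS peers BY bucketId
| where copies < 3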
I still find it difficult to understand the logic of joining two indexes. Below is the query which almost suits my needs... ALMOST:

index="odp" OR index="oap" txt2="ibum_p"
| rename e as c_e
| eval c_e = mvindex(split(c_e, ","), 0)
| stats values(*) by c_e

Line 1 - two indexes joined and one of them filtered (to create a one-to-one relation).
Lines 2 & 3 - rename and modification of the key column in the second index to make it identical to the one in the first index.
Line 4 - show all columns.

The result contains 400 records - the same as each index separately. But the result shows only columns from the second index. I supposed values(*) means all columns from all indexes. I tried typing each column separately but it does not change anything - the columns from the first index are still empty - WHY?? If I get past this milestone I will start aggregations. Any hints?
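A minimal sketch of the pattern that usually fixes this (key_in_odp is a placeholder for whatever field holds the join key in the first index): events that have no c_e at all are silently dropped by the BY clause, so both indexes need to end up with the same key field before the stats, and values(*) AS * keeps the original column names:

index="odp" OR (index="oap" txt2="ibum_p")
| eval c_e = coalesce(mvindex(split(e, ","), 0), key_in_odp)
| stats values(*) AS * BY c_e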
@PickleRick wrote: You're not trying to debug development stuff on a production environment, are you?
Heh, at the moment I'm not. But how do you debug an application that depends on Splunk Enterprise Security without yet another license?
@bowesmana wrote: What are you changing, by custom reporting commands do you mean you've written some python extension? You don't need to restart Splunk generally, but depends what you have changed
Going by this app structure (https://dev.splunk.com/enterprise/docs/developapps/createapps/appanatomy/), I'm talking about changing the Python code in bin/command.py. Yes, we are using an on-prem solution. Thanks, I'll check this app.
In Splunk, SEDCMD works on _raw; there is no option to apply it to a specific field. Temporary solution for when a field value is passed as a string instead of a list in a JSON file:

Search-time extraction:
| rex mode=sed "s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g s/\\\\\"/\"/g" | extract pairdelim="\"{,}" kvdelim=":"

Index-time extraction:
SEDCMD-o365DataJsonRemoveBackSlash = s/(\\)+"/"/g s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g
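For the index-time variant, a minimal sketch of where that line lives (the sourcetype stanza name is an assumption; use your actual O365 sourcetype, and deploy it on the parsing tier, i.e. the HF or indexers):

# props.conf
[o365:management:activity]
SEDCMD-o365DataJsonRemoveBackSlash = s/(\\)+"/"/g s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g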
Hi @gcusello, with the updated query I am not able to fetch the data for the current date. Can you please help me include the data for the current date too?

Query:
index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True | bin span=1d _time | stats sum(eventcount) AS eventcount BY _time file | append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P) | rex field=TEXT "NIDF=(?<file>[^\\s]+)" | transaction startswith="PIDZJEA" endswith="IDJO20P" keeporphans=True | bin span=1d _time | stats sum(eventcount) AS eventcount BY _time | eval file="count after PIDZJEA" | table file eventcount _time] | chart sum(eventcount) AS eventcount OVER _time BY file

Extract:

Also, is it possible to have a visual graph like the one below to show these details:
- IN_per_24h = count of RPWARDA between IDJO20P and PIDZJEA of the day.
- Out_per_24h = count of SPWARAA + SPWARRA between IDJO20P and PIDZJEA of the day.
- Backlog = count after PIDZJEA of the day.
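In case the missing day is just a time-range issue, a minimal sketch (the 7-day window is an assumption): if the search or dashboard time picker snaps its latest time to the start of today (e.g. @d), the current day is excluded, so force the range up to now in both the outer search and the append subsearch:

index=events_prod_cdp_penalty_esa source="SYSLOG" earliest=-7d@d latest=now ... (rest of the outer search unchanged)
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" earliest=-7d@d latest=now ... (rest of the subsearch unchanged) ]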
Again - what do you mean by "as long as events are present"? How should Splunk know that the events are from two separate sessions? That's not me nitpicking - that's a question about how to build such a search.
Adding to @bowesmana's answer - you're not trying to debug development stuff on a production environment, are you? Dev environments typically restart relatively quickly since they don't hold much data. And you don't have to restart Splunk every time you change something, just when you change things that require a restart. I'd hazard a guess that for a search-time command it should be enough to hit /debug/refresh.
ENOTENOUGHINFO What exactly did you do? Did you just spin up an instance restored from snapshot/backup? Did you add it to the cluster? Does the CM see it? Do you see the buckets at all? Haven't they rolled to frozen yet on other nodes? What does the dbinspect say?
What are you changing? By custom reporting commands, do you mean you've written some Python extension? You generally don't need to restart Splunk, but it depends on what you have changed. Are you running on-prem? If so, I highly recommend this app: https://splunkbase.splunk.com/app/4353 If you are changing JavaScript, you can run the bump command (https://hostname/en-GB/_bump) or there is the refresh option (https://hostname/en-GB/debug/refresh), depending on whether you can access these.
Hello everyone, I'm new to Splunk and I have a question: is it possible to update custom reporting command code without restarting Splunk? "After modifying configuration files on disk, you need to restart Splunk Enterprise. This step is required for your updates to take effect. For information on how to restart Splunk Enterprise, see Start and stop Splunk Enterprise in the Splunk Enterprise Admin Manual." I mean, how can I debug my app if I have to restart Splunk every time I change something?
I posted an edit to clarify what I have found so far. Sorry for not doing this earlier. Depending on how old your forwarder was before the upgrade, remember that a direct upgrade to forwarder 9+ is only supported from 8.1.x and higher. That said, I don't think we have seen the end of this yet.
There are many simple solutions out there, and there are some apps and sophisticated solutions which make use of the KV store to keep track of delayed events and other things, but I found them too complicated to use effectively across all the alerts. Here is the solution that I have been using effectively in many Splunk environments that I work on:

1. If the events are not expected to be delayed much (example: UDP inputs, Windows inputs, file monitoring):
earliest=-5m@s latest=-1m@s
earliest=-61m@m latest=-1m@m
Usually events can be delayed by a few seconds for many different reasons, so I found it safe to use a latest time of 1 minute before now.

2. If the events are expected to be delayed by much more (example: Python-based inputs, custom add-ons), as shown in the sketch at the end of this post:
earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s
Here I always prefer to use index time as the primary reference, for a few reasons:
- The alert triggers close to the time the event appears in Splunk.
- We don't miss any events.
- We cover events even if they are delayed by a few hours or more.
- We also cover events that contain a future timestamp, just in case.
We also add earliest and latest along with the index-time search because:
- Using all time makes the search much slower.
- With earliest, you can set the maximum amount of time you expect events to be delayed.
- With latest, you can allow for events that come in with a future timestamp.

Please let me know if I'm missing any scenarios, or post any other solution you have for other users in the community.
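To make option 2 concrete, a minimal sketch of an alert search using this pattern (the index, sourcetype, and filter term are placeholders, and it assumes the alert runs on a 5-minute cron so the _index_* window matches the schedule):

index=my_index sourcetype=my:sourcetype error earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s
| stats count BY host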
How to best choose the time range for handling delayed events in Splunk alerts, to ensure that no events are skipped and no events are repeated.
Recently we replaced our RedHat 7 peers with new RedHat 9 peers, and it seems we lost some data in the process... Looking at the storage, it almost seems like we lost the cold buckets (and maybe also the warm ones). We managed to restore a backup of one of the old RHEL7 peers and connected it to the cluster, but it looks like it is not replicating the cold buckets to the RHEL9 peers. We are not using SmartStore; the cold buckets are in fact just stored in another subdirectory under the $SPLUNK_DB path. So the question arises: are warm and cold buckets replicated? Our replication factor is set to 3 and I added a single restored peer to a 4-peer cluster. If there is no automated way of replicating the cold buckets, can I safely copy them from the RHEL7 node to the RHEL9 nodes (e.g. via scp)?
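A minimal sketch of a quick check for this (the index name is a placeholder): run dbinspect from the search head and count buckets per peer and bucket state, to see which peers still hold warm and cold copies.

| dbinspect index=<your_index>
| stats count BY splunk_server state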