All Posts


@bowesmana wrote: "What are you changing, by custom reporting commands do you mean you've written some python extension? You don't need to restart Splunk generally, but depends what you have changed"

From this app structure (https://dev.splunk.com/enterprise/docs/developapps/createapps/appanatomy/), I'm talking about changing the Python code in bin/command.py. Yes, we are using an on-prem solution. Thanks, I'll check this app.
In Splunk, SEDCMD works on _raw; there is no option to apply it to a specific field. Temporary solution, for when a field value is passed as a string instead of a list in a JSON file:

Search-time extraction:

| rex mode=sed "s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g s/\\\\\"/\"/g"
| extract pairdelim="\"{,}" kvdelim=":"

Index-time extraction:

SEDCMD-o365DataJsonRemoveBackSlash = s/(\\)+"/"/g s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g
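For context, the index-time SEDCMD above belongs in props.conf on the indexer or heavy forwarder. A minimal sketch, assuming a hypothetical sourcetype name:

# props.conf (sourcetype name is a placeholder)
[o365:json]
SEDCMD-o365DataJsonRemoveBackSlash = s/(\\)+"/"/g s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g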
Hi @gcusello With the updated query, I am not able to fetch the data for the current date. Can you please help me add the current date's data too?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time file
| append
    [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" endswith="IDJO20P" keeporphans=True
    | bin span=1d _time
    | stats sum(eventcount) AS eventcount BY _time
    | eval file="count after PIDZJEA"
    | table file eventcount _time]
| chart sum(eventcount) AS eventcount OVER _time BY file

Extract: [screenshot not included]

Also, is it possible to have a visual graph like the one below to show these details?

IN_per_24h = count of RPWARDA between IDJO20P and PIDZJEA of the day.
Out_per_24h = count of SPWARAA + SPWARRA between IDJO20P and PIDZJEA of the day.
Backlog = count after PIDZJEA of the day.

[screenshot not included]
Again - what do you mean by "as long as events are present"? How should Splunk know that the events are from two separate sessions? That's not me nitpicking - that's a question about how to build such a search.
Adding to @bowesmana's answer - you're not trying to debug development stuff on a production environment, are you? Dev environments typically restart relatively quickly since they don't hold much data. And you don't have to restart Splunk every time you change something - only when you change things that require a restart. I'd hazard a guess that for a search-time command it should be enough to hit /debug/refresh.
ENOTENOUGHINFO. What exactly did you do? Did you just spin up an instance restored from a snapshot/backup? Did you add it to the cluster? Does the CM see it? Do you see the buckets at all? Haven't they rolled to frozen yet on the other nodes? What does dbinspect say?
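For reference, a minimal dbinspect search to check bucket states per peer (the index name is a placeholder):

| dbinspect index=your_index
| stats count by state, splunk_server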
What are you changing - by custom reporting commands do you mean you've written some Python extension? You generally don't need to restart Splunk, but it depends on what you have changed. Are you running on-prem? If so, I highly recommend this app: https://splunkbase.splunk.com/app/4353 If you are changing JavaScript, you can run the bump command https://hostname/en-GB/_bump or there is the refresh option https://hostname/en-GB/debug/refresh, depending on whether you can access these.
Hello everyone, I'm new to Splunk and I have a question: is it possible to update custom reporting command code without restarting Splunk? "After modifying configuration files on disk, you need to restart Splunk Enterprise. This step is required for your updates to take effect. For information on how to restart Splunk Enterprise, see Start and stop Splunk Enterprise in the Splunk Enterprise Admin Manual." I mean... how can I debug my app if I have to restart Splunk every time I change something?
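For context, a custom search command like this is typically registered in commands.conf; a minimal sketch, with hypothetical command and file names:

# commands.conf (command name and filename are placeholders)
[mycommand]
filename = command.py
chunked = true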
I posted an edit to clarify what I have found so far. Sorry for not doing this earlier. Depending on how old your forwarder was before the upgrade, remember that a direct upgrade to forwarder 9+ is only supported from 8.1.x and higher. That said, I don't think we have seen the end of this yet.
There are many simple solutions out there, and there are some apps and sophisticated solutions which make use of the KV store to keep track of delayed events and other things, but I found them too complicated to use effectively across all the alerts. Here is the solution that I have been using effectively in many Splunk environments that I work on:

If the events are not expected to be delayed much (example: UDP inputs, Windows inputs, file monitoring):

earliest=-5m@s latest=-1m@s
earliest=-61m@m latest=-1m@m

Usually events can be delayed by a few seconds for many different reasons, so I found it safe to set the latest time to 1 minute before now.

If the events are expected to be delayed by much more (example: Python-based inputs, custom add-ons):

earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s

Here I always prefer to use index time as the primary reference, for a few reasons:

- The alert triggers close to the time the event appears in Splunk.
- We don't miss any events.
- We cover events even if they are delayed by a few hours or more.
- We also cover events that carry a future timestamp, just in case.

We also add earliest and latest along with the index-time search because:

- Using all-time makes the search much slower.
- With earliest, you can state the maximum amount of time you expect events to be delayed.
- With latest, you can allow for events that come in with a future timestamp.

A sketch of this scheme in practice is shown below. Please let me know if I'm missing any scenarios, or post any other solution you have for other users in the community.
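As an illustration, a minimal alert search using the second scheme (the index, sourcetype, and stats clause are placeholders):

index=your_index sourcetype=your_sourcetype earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s
| stats count by host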
How do you best choose the time range for Splunk alerts to handle delayed events, so that no events are skipped and no events are counted twice?
Recently we replaced our RedHat 7 peers with new RedHat 9 peers, and it seems we lost some data in the process... Looking at the storage, it almost seems like we lost the cold buckets (and maybe also the warm ones). We managed to restore a backup of one of the old RHEL7 peers and connected it to the cluster, but it looks like it's not replicating the cold buckets to the RHEL9 peers. We are not using SmartStore; the cold buckets are in fact just stored in another subdirectory under the $SPLUNK_DB path. So the question arises: are warm and cold buckets replicated? Our replication factor is set to 3 and I added a single restored peer to a 4-peer cluster. If there is no automated way of replicating the cold buckets, can I safely copy them from the RHEL7 node to the RHEL9 nodes (e.g. via scp)?
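As a starting point for seeing what the cluster actually knows about, the cluster manager can report per-peer and per-index replication status; a minimal sketch (the path is a placeholder):

# On the cluster manager
$SPLUNK_HOME/bin/splunk show cluster-status --verbose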
As long as events are present, the user is logged in; my goal is to calculate the total time during which there are events.
Thanks @bowesmana @ITWhisperer 
Hi, I have placed both the transforms and props at the indexer layer. We are getting the CSV data through UFs.
I tried the regex and it did not work.
I think you are looking for map. index=someIndex searchString | rex field=_raw "stuff(?<REFERENCE_VAL>somestuff)$" | rename _time as EVENT_TIME | eval start = EVENT_TIME - 1, end = EVENT_TIME + 1 | ... See more...
I think you are looking for map.

index=someIndex searchString
| rex field=_raw "stuff(?<REFERENCE_VAL>somestuff)$"
| rename _time as EVENT_TIME
| eval start = EVENT_TIME - 1, end = EVENT_TIME + 1
| map maxsearches=1000 search="index=anIndex someSearchString earliest=$start$ latest=$end$
    | rex field=_raw \"stuff(?<RELATED_VAL>otherstuff)$\"
    | rename _time as RELATED_TIME
    | fields RELATED_*"
| table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL

Caveats:

- When there are many events in the main search, it can be very, very expensive.
- You need to give maxsearches a number; it cannot be 0. (See the documentation for more limitations.)
- If you are using [-1000ms, +1000ms] windows, chances are strong that all these start-end pairs will overlap badly, rendering your question itself rather meaningless. You can develop algorithms to merge these overlaps to make the map command more efficient (by reducing intervals). But you need to ask yourself (or your boss) seriously: is this a well-posed question?
Hi, how can write to app.conf file in splunk using python. i am able to read the file using splunk.clilib but not sure sure how to write into it. [stanza_name] name=abcde   how can i add a new ... See more...
Hi, how can I write to the app.conf file in Splunk using Python? I am able to read the file using splunk.clilib, but I'm not sure how to write to it.

[stanza_name]
name=abcde

How can I add a new entry or update an existing one? Please help. Thanks.
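This is not an official Splunk API, but as one possible approach, a minimal sketch that treats app.conf as an INI-style file and edits it with Python's configparser (the path and stanza are placeholders; Splunk also exposes conf files through its REST endpoints, which is the supported route):

# Minimal sketch, not the official Splunk API. Path and stanza are placeholders.
import configparser

conf_path = "/opt/splunk/etc/apps/my_app/local/app.conf"  # hypothetical path

# Splunk .conf keys are case-sensitive and use no %-interpolation,
# so disable interpolation and preserve key case.
parser = configparser.ConfigParser(interpolation=None)
parser.optionxform = str

parser.read(conf_path)
if not parser.has_section("stanza_name"):
    parser.add_section("stanza_name")
parser.set("stanza_name", "name", "abcde")  # add or update the entry

with open(conf_path, "w") as f:
    parser.write(f)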
Hi @Poojitha following the example from the documentation on spath: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath#3:_Extract_and_expand_JSON_events_with_multi-valued_fields

Here is a run-anywhere example:

| makeresults
| eval _raw="{ \"Tag\": [ {\"Key\": \"app\", \"Value\": \"test_value\"}, {\"Key\": \"key1\", \"Value\": \"value1\"}, {\"Key\": \"key2\", \"Value\": \"value2\"}, {\"Key\": \"email\", \"Value\": \"test@abc.com\"} ] }"
| spath
| rename Tag{}.Key as key, Tag{}.Value as value
| eval x=mvzip(key,value)
| mvexpand x
| eval x=split(x,",")
| eval key=mvindex(x,0)
| eval value=mvindex(x,1)
| table _time key value
"I need to extract the highlighted field in the below messege using regex..."

Not only do you not NEED to do this using regex, you MUST NOT use regex for this task. As @ITWhisperer points out, your data is in JSON, which is structured data. Never treat structured data as plain text, as @PickleRick points out.

As @PickleRick notes, you can set KV_MODE = json in your sourcetype. But even if you do not, Splunk should have already figured this out and given you CrmId, status, source, etc. Do you not get these field names and values?

field name                  field value
CrmId                       11111111
SiteId                      xxxx
applicationReceivedDate
assignmentStatus
assignmentStatusCode
c4cEventId
cancelReason
category                    Course Enquiry
channelPartnerApplication   no
createdBy                   Technical User
eventId
eventRegistrationId
eventTime                   2024-06-24T06:15:42Z
externalId
isFirstLead                 yes
lastChangedBy               Technical User
leadId                      22222222
leadSubAgentID
leaduuid                    1234455
referredBy
referrerCounsellor
source                      Online Enquiry
status                      Open
studentCrmUuid              634543564
subCategory

Even if you do not, for some oddball reason, using spath should suffice. This is an example with spath using @ITWhisperer's makeresults emulation:

| makeresults
| eval _raw="{ \"eventTime\": \"2024-06-24T06:15:42Z\", \"leaduuid\": \"1234455\", \"CrmId\": \"11111111\", \"studentCrmUuid\": \"634543564\", \"externalId\": \"\", \"SiteId\": \"xxxx\", \"subCategory\": \"\", \"category\": \"Course Enquiry\", \"eventId\": \"\", \"eventRegistrationId\": \"\", \"status\": \"Open\", \"source\": \"Online Enquiry\", \"leadId\": \"22222222\", \"assignmentStatusCode\": \"\", \"assignmentStatus\": \"\", \"isFirstLead\": \"yes\", \"c4cEventId\": \"\", \"channelPartnerApplication\": \"no\", \"applicationReceivedDate\": \"\", \"referredBy\": \"\", \"referrerCounsellor\": \"\", \"createdBy\": \"Technical User\", \"lastChangedBy\": \"Technical User\", \"leadSubAgentID\": \"\", \"cancelReason\": \"\"}, \"offersInPrinciple\": {\"offersinPrinciple\": \"no\", \"oipReferenceNumber\": \"\", \"oipVerificationStatus\": \"\"}, \"qualification\": {\"qualification\": \"Unqualified\", \"primaryFinancialSource\": \"\"}, \"online\": {\"referringUrl\": \"\", \"idpNearestOffice\": \"\", \"sourceSiteId\": \"xxxxx\", \"preferredCounsellingMode\": \"\", \"institutionInfo\": \"\", \"courseName\": \"\", \"howDidYouHear\": \"Social Media\"}" ``` ITWhisperer's data emulation ```
| spath

It gives the above field names and values.
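For reference, a minimal sketch of the KV_MODE setting mentioned above; it is a search-time setting, so it goes in props.conf on the search head (the sourcetype name is a placeholder):

# props.conf on the search head (sourcetype name is a placeholder)
[my:json:sourcetype]
KV_MODE = json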