All Posts


I posted an edit to clarify what I have found so far. Sorry for not doing this earlier. Depending on how old your forwarder was before the upgrade, remember that a direct upgrade to forwarder 9+ is only supported from 8.1.x and higher. That said, I don't think we have seen the end of this yet.
There are many simple solutions out there, as well as apps and more sophisticated solutions that use the KV store to keep track of delayed events and other things, but I found them too complicated to use effectively across all the alerts. Here is the approach that I have been using effectively in the many Splunk environments I work on.

If the events are not expected to be delayed much (for example: UDP inputs, Windows inputs, file monitoring):

earliest=-5m@s latest=-1m@s
earliest=-61m@m latest=-1m@m

Events can be delayed by a few seconds for many different reasons, so I have found it safe to set the latest time to one minute before now.

If the events are expected to be delayed by much more (for example: Python-based inputs, custom add-ons):

earliest=-6h@h latest=+1h@h _index_earliest=-6m@s _index_latest=-1m@s

Here I prefer to use index time as the primary reference for a few reasons:
- The alert triggers close to the time the event actually appears in Splunk.
- We don't miss any events.
- We cover events even if they are delayed by a few hours or more.
- We also cover events that carry a future timestamp, just in case.

We still add earliest and latest alongside the index-time constraints because:
- Searching over all time makes the search much slower.
- With earliest, you can allow for the maximum delay you expect.
- With latest, you can allow for events that arrive with a future timestamp.

A minimal sketch of an alert search using this pattern is shown below. Please let me know if I'm missing any scenarios, or post any other solution that you have for other users on the community.
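For illustration (this is not from the original post), here is a minimal sketch of what a scheduled alert search using the index-time pattern above could look like. The index, sourcetype, and alert condition are placeholder assumptions, and the index-time window assumes the alert runs on a 5-minute schedule; size the windows to the delay you actually expect.

index=my_index sourcetype=my_sourcetype
    earliest=-6h@h latest=+1h@h ``` event-time window: maximum expected delay plus possible future timestamps ```
    _index_earliest=-6m@s _index_latest=-1m@s ``` index-time window: matches the 5-minute schedule, offset 1 minute from now for safety ```
| stats count by host
| where count > 100 ``` placeholder alert condition; replace with your own logic ```

Because the index-time windows of consecutive runs do not overlap, each event is evaluated exactly once, while the bounded event-time window keeps the search far cheaper than searching over all time.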
How to best choose the time range for Splunk alerts to handle delayed events, so that no events are skipped and no events are processed twice.
Recently we replaced our RedHat 7 peers with new RedHat 9 peers, and it seems we lost some data in the process. Looking at the storage, it almost seems like we lost the cold buckets (and maybe also the warm ones). We managed to restore a backup of one of the old RHEL7 peers and connected it to the cluster, but it looks like it is not replicating the cold buckets to the RHEL9 peers. We are not using SmartStore; the cold buckets are in fact just stored in another subdirectory under the $SPLUNK_DB path. So the question arises: are warm and cold buckets replicated? Our replication factor is set to 3, and I added a single restored peer to a 4-peer cluster. If there is no automated way of replicating the cold buckets, can I safely copy them from the RHEL7 node to the RHEL9 nodes (e.g. via scp)?
As long as events are present, the user is logged in; my goal is to calculate the total time during which there are events.
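Not from the original thread, but as a hedged sketch of one way to compute that, assuming an index called my_index, a user field, and that a gap of more than 15 minutes means the user was logged out:

index=my_index user=*
| transaction user maxpause=15m ``` group consecutive events per user into one session; a gap over 15 minutes starts a new session ```
| stats sum(duration) as total_logged_in_seconds by user

transaction emits a duration field (seconds between the first and last event of each session). On large data volumes a streamstats-based approach is usually cheaper, but transaction keeps the sketch short.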
Thanks @bowesmana @ITWhisperer 
Hi, I have placed both the transforms and props at the indexer layer. We are getting the CSV data through UFs.
I tried the regex and it did not work
I think you are looking for map.

index=someIndex searchString
| rex field=_raw "stuff(?<REFERENCE_VAL>somestuff)$"
| rename _time as EVENT_TIME
| eval start = EVENT_TIME - 1, end = EVENT_TIME + 1
| map maxsearches=1000 search="search index=anIndex someSearchString earliest=$start$ latest=$end$
    | rex field=_raw \"stuff(?<RELATED_VAL>otherstuff)$\"
    | rename _time as RELATED_TIME
    | fields RELATED_*"
| table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL

Caveats:
- When there are many events in the main search, this can be very, very expensive.
- You need to give maxsearches a number; it cannot be 0. (See the documentation for more limitations.)
- If you are using [-1000ms, +1000ms], chances are strong that these start-end pairs will overlap badly, rendering your question itself rather meaningless. You can develop algorithms to merge these overlaps to make the map command more efficient (by reducing intervals) - see the sketch after this list. But you need to ask yourself (or your boss) seriously: is this a well-posed question?
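As a rough sketch of that interval-merging idea (not part of the original answer), the following assumes each row already carries numeric start and end fields and collapses overlapping windows into one row per merged interval before they are fed to map:

| sort 0 start
| streamstats current=f max(end) as prev_end ``` latest end time seen among earlier rows ```
| eval new_group=if(isnull(prev_end) OR start > prev_end, 1, 0) ``` a gap before this row starts a new merged interval ```
| streamstats sum(new_group) as group_id
| stats min(start) as start, max(end) as end by group_id ``` one row per merged interval ```

Fewer, wider intervals mean fewer subsearches launched by map.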
Hi, how can I write to the app.conf file in Splunk using Python? I am able to read the file using splunk.clilib but I am not sure how to write into it.

[stanza_name]
name=abcde

How can I add a new entry or update an existing one? Please help. Thanks
Hi @Poojitha following the example from the documentation on spath: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath#3:_Extract_and_expand_JSON_events_with_multi-valued_fields

Here is a runanywhere example:

| makeresults
| eval _raw="{ \"Tag\": [ {\"Key\": \"app\", \"Value\": \"test_value\"}, {\"Key\": \"key1\", \"Value\": \"value1\"}, {\"Key\": \"key2\", \"Value\": \"value2\"}, {\"Key\": \"email\", \"Value\": \"test@abc.com\"} ] }"
| spath
| rename Tag{}.Key as key, Tag{}.Value as value
| eval x=mvzip(key,value)
| mvexpand x
| eval x=split(x,",")
| eval key=mvindex(x,0)
| eval value=mvindex(x,1)
| table _time key value
I need to extract the highlighted field in the below message using regex...

Not only do you not NEED to do this using regex, you MUST NOT use regex for this task. As @ITWhisperer points out, your data is in JSON, i.e. structured data. Never treat structured data as plain text, as @PickleRick points out. As @PickleRick notes, you can set KV_MODE = json in your sourcetype. But even if you do not, Splunk should already have figured this out and given you CrmId, status, source, etc. Do you not get these field names and values?

field name                  field value
CrmId                       11111111
SiteId                      xxxx
applicationReceivedDate
assignmentStatus
assignmentStatusCode
c4cEventId
cancelReason
category                    Course Enquiry
channelPartnerApplication   no
createdBy                   Technical User
eventId
eventRegistrationId
eventTime                   2024-06-24T06:15:42Z
externalId
isFirstLead                 yes
lastChangedBy               Technical User
leadId                      22222222
leadSubAgentID
leaduuid                    1234455
referredBy
referrerCounsellor
source                      Online Enquiry
status                      Open
studentCrmUuid              634543564
subCategory

Even if you do not for some oddball reason, using spath should suffice. This is an example with spath using @ITWhisperer's makeresults emulation.

| makeresults
| eval _raw="{ \"eventTime\": \"2024-06-24T06:15:42Z\", \"leaduuid\": \"1234455\", \"CrmId\": \"11111111\", \"studentCrmUuid\": \"634543564\", \"externalId\": \"\", \"SiteId\": \"xxxx\", \"subCategory\": \"\", \"category\": \"Course Enquiry\", \"eventId\": \"\", \"eventRegistrationId\": \"\", \"status\": \"Open\", \"source\": \"Online Enquiry\", \"leadId\": \"22222222\", \"assignmentStatusCode\": \"\", \"assignmentStatus\": \"\", \"isFirstLead\": \"yes\", \"c4cEventId\": \"\", \"channelPartnerApplication\": \"no\", \"applicationReceivedDate\": \"\", \"referredBy\": \"\", \"referrerCounsellor\": \"\", \"createdBy\": \"Technical User\", \"lastChangedBy\": \"Technical User\" , \"leadSubAgentID\": \"\", \"cancelReason\": \"\"}, \"offersInPrinciple\": {\"offersinPrinciple\": \"no\", \"oipReferenceNumber\": \"\", \"oipVerificationStatus\": \"\"}, \"qualification\": {\"qualification\": \"Unqualified\", \"primaryFinancialSource\": \"\"}, \"online\": {\"referringUrl\": \"\", \"idpNearestOffice\": \"\", \"sourceSiteId\": \"xxxxx\", \"preferredCounsellingMode\": \"\", \"institutionInfo\": \"\", \"courseName\": \"\", \"howDidYouHear\": \"Social Media\"}" ``` ITWhisperer's data emulation ```
| spath

It gives the above field names and values.
I have a few questions I would like your support with. Recently we migrated from a distributed to a clustered environment, and I am not yet familiar with the cluster environment.

1st question: On the migrated standalone search head we needed to run the Splunk App for CEF to transform some events into CEF format prior to sending them. For the Splunk App for CEF to work, we unrestricted "unsupported hotlinked imports" on that standalone search head in "Settings -> Server Settings -> Internal Library Settings". Unfortunately, after migration I can't find the "Server Settings, Server Control, etc." settings on the cluster members.
1.a: I am wondering if this is normal behavior for cluster members; if so, how can I unrestrict "unsupported hotlinked imports"?
1.b: I am also wondering whether there is another way to transform events into CEF format without using the Splunk App for CEF.

2nd question: We are using one instance as both cluster manager and search head deployer. I am wondering if it is normal to see the search head deployer listed among the search heads.

Thank you
First, check whether this pre-built app for Commvault meets your specific needs and, if so, follow the installation and configuration steps mentioned in the docs: https://splunkbase.splunk.com/app/5718
Hi @abhaywdc there are a few ways to do this. Here's a way to do it using props.conf/transforms.conf:

props.conf:

...
TRANSFORMS-removeDupe = removeDupe

transforms.conf:

[removeDupe]
REGEX = (?s)(.*?)((but[\r\n]+)+)(.*)
FORMAT = $1$3$4
DEST_KEY = _raw

This transform tells Splunk to replace all the repeated instances of "but" with the last instance, thereby de-duplicating them. In the regex, (?s) lets . match newlines, group 1 captures everything before the run of "but" lines, group 2 matches the whole run, group 3 holds only its last repetition, and group 4 captures the remainder; the FORMAT of $1$3$4 therefore keeps just one "but".
The dashboard has two visible panels, A and C. Panel B is hidden. When I use the default export to PDF, it only shows panels A and C, which works as intended. Panel B itself is a modal dialog box on top of the underlying dashboard that is also hidden by depends="$token$". So ideally I want to adjust the export to PDF functionality to export panel B rather than the whole dashboard.
Panel B is part of dashboard X, but you say that the export works for dashboard X but not for panel B? When you say popup, do you mean a modal dialog box on top of the underlying dashboard, or just a panel hidden by depends="$token$"? I expect it will not export a modal popup generated through JS.
Thank you for your support. Hmm, I have made sure that all the samples in DatasetA are the same as in DatasetB. Therefore, I do not understand why:
+DatasetA.action has values
+DatasetA.DatasetB.action does not have values
Not only the field "action"; all the fields under ".DatasetB" have no values, even though DatasetB is inherited from DatasetA. Maybe something is wrong in the data model settings?
I have a dashboard X consisting of multiple panels (A, B, C), each populated with dynamic tokens. Panel A consists of tabular data. When a user clicks on a cell, this registers the table data as tokens. When a token value changes, this triggers JavaScript which "activates" panel B, which is originally hidden. This then creates a popup consisting of panel B, populated with data passed via tokens from panel A.

Splunk has a default Export to PDF functionality. I know it uses pdfgen_endpoint.py, but how does clicking this button trigger the Python script? Currently this functionality works for exporting dashboard X. How do I make adjustments so it can also work for panel B?

The /splunkd/__raw/services/pdfgen/render PDF endpoint must be called with one of the following args: 'input-dashboard=<dashboard-id>' or 'input-report=<report-id>' or 'input-dashboard-xml=<dashboard-xml>', but if I try to parse the XML it requires all token values to be resolved.

Please assist.
No results after executing the query. There is a lookup file called "bd_users_hierarchy.csv" which contains Active Directory users, and "mapr_ticket_contacts.csv" which contains the UseCase information. Please check the screenshot below and the query I have written to find the top CPU users and use cases on all edge nodes.

In the inputlookup file called "mapr_ticket_contacts.csv", UseCases ending with the letters "s, q, g, p" need to be trimmed down to get the email addresses. For example, if I remove the letter "p"...

Edge node information --- Edge_Nodes_All.csv
Active Directory users --- bd_users_hierarchy.csv
UseCases --- mapr_ticket_contacts.csv (need to trim down the letters "s, q, g, p")

I have tried the below Splunk query, but I am not getting results:

index=imdc_*_os sourcetype=ps1
    [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" | fields host]
| fields cluster, host, user, total_cpu
| join type=inner host
    [ search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [| inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" | fields host]
    | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| search NOT user IN ("root","imdcsup","hadpsup")
| sort - "CPU %"
| head 10
| join type=left user
    [| inputlookup bd_users_hierarchy.csv | rename email as user_email | table user, user_email]
| join type=left user
    [| inputlookup mapr_ticket_contacts.csv
    | eventstats max(Modified_Time) as Modified_Time_max by UseCase
    | where Modified_Time=Modified_Time_max
    | eval Modified_Time=if(Modified_Time=0,"Not Updated",strftime(Modified_Time,"%Y-%m-%d %H:%M"))
    | rename Updated_By as "Last_Updated_By", Modified_Time as "Last_Modified_Time"
    | rex field=UseCase "(?<UseCase>.*)."
    | rename UseCase as user
    | rename Support_Team_DL as user_email
    | table user, user_email]

Appreciate your quick response on the same.
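Not part of the original post, but regarding the trimming step: "(?<UseCase>.*)." always drops the last character, whatever it is. If the intent is to drop a trailing letter only when it is s, q, g, or p, a hedged alternative for the mapr_ticket_contacts.csv subsearch could look like the sketch below (the field names are taken from the query above):

| inputlookup mapr_ticket_contacts.csv
| rex field=UseCase mode=sed "s/[sqgp]$//" ``` remove a single trailing s, q, g, or p if present; other endings stay unchanged ```
| rename UseCase as user, Support_Team_DL as user_email
| table user, user_email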