All Posts


As long as events are present, the user is logged in; my goal is to calculate the total time during which there are events.
Thanks @bowesmana @ITWhisperer 
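A minimal sketch of one way to approximate this, assuming a hypothetical index=myindex with a user field, and treating any one-minute span containing at least one event as logged-in time:

index=myindex user=*
| bin _time span=1m
| stats count by user, _time
| stats count as active_minutes by user
| eval active_seconds = active_minutes * 60

The span size bounds the error: a single event within a minute counts the whole minute as active.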
Hi, I have placed both the transforms and props at the indexer layer. We are getting the CSV data through UFs.
I tried the regex and it did not work
I think you are looking for map.

index=someIndex searchString
| rex field=_raw "stuff(?<REFERENCE_VAL>somestuff)$"
| rename _time as EVENT_TIME
| eval start = EVENT_TIME - 1, end = EVENT_TIME + 1
| map maxsearches=1000 search="search index=anIndex someSearchString earliest=$start$ latest=$end$ | rex field=_raw \"stuff(?<RELATED_VAL>otherstuff)$\" | rename _time as RELATED_TIME | fields RELATED_*"
| table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL

(The inner double quotes in the map search string must be escaped with backslashes, and the subsearch string should begin with the search command.)

Caveats:
- When there are many events in the main search, this can be very, very expensive.
- You need to give maxsearches a number; it cannot be 0. (See the documentation for more limitations.)
- If you are using [-1000ms, +1000ms] windows, chances are strong that these start-end pairs will overlap badly, rendering your question itself rather meaningless. You can develop algorithms to merge these overlaps and make the map command more efficient (by reducing the number of intervals). But you need to ask yourself (or your boss) seriously: is this a well-posed question?
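For the overlap-merging idea in the last caveat, here is a minimal sketch of a standard interval merge over the start/end fields computed above; this is one workable approach under my assumptions, not part of the original answer:

| sort 0 start
| streamstats current=f max(end) as prev_end
| eval new_interval=if(isnull(prev_end) OR start>prev_end, 1, 0)
| streamstats sum(new_interval) as interval_id
| stats min(start) as start, max(end) as end by interval_id

Each merged interval then needs only one map invocation instead of one per event.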
Hi, how can I write to the app.conf file in Splunk using Python? I am able to read the file using splunk.clilib, but I am not sure how to write to it.

[stanza_name]
name=abcde

How can I add a new entry or update an existing one? Please help. Thanks
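One common approach is to avoid editing the file directly and instead go through the REST configuration endpoints, for example with the Splunk SDK for Python (splunklib). A minimal sketch, where the host, credentials, app name, and stanza/key values are placeholder assumptions:

import splunklib.client as client

# Connect to splunkd's management port (all values here are placeholders).
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme",
    app="my_app",  # hypothetical app whose app.conf we want to change
)

conf = service.confs["app"]        # the app.conf configuration file
if "stanza_name" not in conf:
    conf.create("stanza_name")     # create the stanza if it does not exist
conf["stanza_name"].submit({"name": "abcde"})  # add or update the key

Writing through REST also means splunkd picks up the change immediately, unlike a direct file edit, which typically requires a restart or reload to take effect.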
Hi @Poojitha following the example from the documentation on spath: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath#3:_Extract_and_expand_JSON_events_with_multi-valued_fields

Here is a runanywhere example (note the trailing comma after the last array element in the original has been removed, as it is invalid JSON):

| makeresults
| eval _raw="{ \"Tag\": [ {\"Key\": \"app\", \"Value\": \"test_value\"}, {\"Key\": \"key1\", \"Value\": \"value1\"}, {\"Key\": \"key2\", \"Value\": \"value2\"}, {\"Key\": \"email\", \"Value\": \"test@abc.com\"} ] }"
| spath
| rename Tag{}.Key as key, Tag{}.Value as value
| eval x=mvzip(key,value)
| mvexpand x
| eval x=split(x,",")
| eval key=mvindex(x,0)
| eval value=mvindex(x,1)
| table _time key value
I need to extract the highlighted field in the below message using regex...

Not only do you not NEED to do this using regex, you MUST NOT use regex for this task. As @ITWhisperer points out, your data is JSON, a structured format. Never treat structured data as plain text, as @PickleRick points out.

As @PickleRick notes, you can set KV_MODE = json in your sourcetype. But even if you do not, Splunk should have already figured this out and given you CrmId, status, source, etc. Do you not get these field names and values?

field name                   field value
CrmId                        11111111
SiteId                       xxxx
applicationReceivedDate
assignmentStatus
assignmentStatusCode
c4cEventId
cancelReason
category                     Course Enquiry
channelPartnerApplication    no
createdBy                    Technical User
eventId
eventRegistrationId
eventTime                    2024-06-24T06:15:42Z
externalId
isFirstLead                  yes
lastChangedBy                Technical User
leadId                       22222222
leadSubAgentID
leaduuid                     1234455
referredBy
referrerCounsellor
source                       Online Enquiry
status                       Open
studentCrmUuid               634543564
subCategory

Even if you do not, for some oddball reason, using spath should suffice. This is an example with spath using @ITWhisperer's makeresults emulation:

| makeresults
| eval _raw="{ \"eventTime\": \"2024-06-24T06:15:42Z\", \"leaduuid\": \"1234455\", \"CrmId\": \"11111111\", \"studentCrmUuid\": \"634543564\", \"externalId\": \"\", \"SiteId\": \"xxxx\", \"subCategory\": \"\", \"category\": \"Course Enquiry\", \"eventId\": \"\", \"eventRegistrationId\": \"\", \"status\": \"Open\", \"source\": \"Online Enquiry\", \"leadId\": \"22222222\", \"assignmentStatusCode\": \"\", \"assignmentStatus\": \"\", \"isFirstLead\": \"yes\", \"c4cEventId\": \"\", \"channelPartnerApplication\": \"no\", \"applicationReceivedDate\": \"\", \"referredBy\": \"\", \"referrerCounsellor\": \"\", \"createdBy\": \"Technical User\", \"lastChangedBy\": \"Technical User\" , \"leadSubAgentID\": \"\", \"cancelReason\": \"\"}, \"offersInPrinciple\": {\"offersinPrinciple\": \"no\", \"oipReferenceNumber\": \"\", \"oipVerificationStatus\": \"\"}, \"qualification\": {\"qualification\": \"Unqualified\", \"primaryFinancialSource\": \"\"}, \"online\": {\"referringUrl\": \"\", \"idpNearestOffice\": \"\", \"sourceSiteId\": \"xxxxx\", \"preferredCounsellingMode\": \"\", \"institutionInfo\": \"\", \"courseName\": \"\", \"howDidYouHear\": \"Social Media\"}" ``` ITWhisperer's data emulation ```
| spath

It gives the above field names and values.
I have a few questions I would like your support with. Recently we migrated from a distributed to a clustered environment, and I am not yet familiar with the cluster env.

1st question: On the migrated standalone search head we were required to run the Splunk App for CEF to transform some events into CEF format prior to sending them. For some reason, for the Splunk App for CEF to work, we unrestricted "unsupported hotlinked imports" on that standalone search head in "Settings -> Server Settings -> Internal Library Settings". Unfortunately, after migration, I cannot find "Server Settings, Server Control, etc." on the cluster members.
1.a: I am wondering if this is normal behavior for cluster members; if yes, how can I unrestrict "unsupported hotlinked imports"?
1.b: I am also wondering whether there is another way to transform events into CEF format without using the Splunk App for CEF.

2nd question: We are using one instance as both cluster manager and search head deployer. I am wondering if it is normal to see the search head deployer listed among the search heads.

Thank you
First, check whether this pre-built app for Commvault meets your specific needs; if so, follow the installation and configuration steps mentioned in the doc: https://splunkbase.splunk.com/app/5718
Hi @abhaywdc there are a few ways to do this. Here's a way to do it using props.conf/transforms.conf:

props.conf:

...
TRANSFORMS-removeDupe=removeDupe

transforms.conf:

[removeDupe]
REGEX = (?s)(.*?)((but[\r\n]+)+)(.*)
FORMAT = $1$3$4
DEST_KEY = _raw

This transform tells Splunk to replace the run of repeated "but" lines with its last instance, thereby de-duplicating them.

[screenshot omitted: regexr breakdown of the regex]
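As an alternative sketch (a suggestion of mine, not from the original answer), a SEDCMD in props.conf can perform a similar collapse; the sourcetype name is hypothetical, and I have not verified how the replacement handles the newline, so test on sample data first:

[my_sourcetype]
SEDCMD-removeDupe = s/(but[\r\n]+)+/but\n/g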
So the dashboard has two visible panels, A and C, which are shown; panel B is hidden. When I use the default export to PDF, it only shows panels A and C, which works as intended. Panel B itself is a modal dialog box on top of the underlying dashboard that is also hidden by depends="$token$". So ideally I want to adjust the export-to-PDF functionality to export panel B and not the whole dashboard.
Panel B is part of dashboard X, but you say that the export works for dashboard X but not for panel B? When you say popup, do you mean a modal dialog box on top of the underlying dashboard, or just a panel hidden by depends="$token$"? I expect it will not export a modal popup generated through JS.
Thank you for your support. Hmm, I made sure that all the samples in DatasetA are the same as in DatasetB. Therefore, I do not understand why:

+ DatasetA.action has values
+ DatasetA.DatasetB.action does not have values

Not only the field "action": none of the fields under ".DatasetB" have values, even though DatasetB is inherited from DatasetA. Maybe something is wrong in the datamodel settings?
I have a dashboard X consisting of multiple panels (A, B, C), each populated with dynamic tokens. Panel A consists of tabular data. When a user clicks on a cell, this registers the table data as tokens. When a token value changes, this triggers JavaScript which "activates" panel B, which is originally hidden. This then creates a popup consisting of panel B, populated with data passed via tokens from panel A.

Splunk has a default Export to PDF functionality. I know it uses pdfgen_endpoint.py, but how does clicking this button trigger the Python script? Currently this functionality works for exporting dashboard X. How do I make adjustments so it also works for panel B?

/splunkd/__raw/services/pdfgen/render

PDF endpoint must be called with one of the following args: 'input-dashboard=<dashboard-id>' or 'input-report=<report-id>' or 'input-dashboard-xml=<dashboard-xml>'

But if I try to pass the XML, it requires all token values to be resolved. Please assist.
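A minimal sketch of the input-dashboard-xml route, with a hypothetical host, credentials, and a panel-B-only XML skeleton; the tokens are substituted in code before posting, since the endpoint expects fully resolved XML:

import requests

# Hypothetical single-panel dashboard XML containing one token, $count$.
panel_b_xml = """<dashboard>
  <label>Panel B</label>
  <row><panel>
    <table><search><query>index=_internal | head $count$</query></search></table>
  </panel></row>
</dashboard>"""

# Resolve every token ourselves; pdfgen will not resolve them for us.
tokens = {"$count$": "10"}
for tok, val in tokens.items():
    panel_b_xml = panel_b_xml.replace(tok, val)

resp = requests.post(
    "https://localhost:8089/services/pdfgen/render",
    auth=("admin", "changeme"),
    data={"input-dashboard-xml": panel_b_xml},
    verify=False,  # lab convenience; verify certificates in production
)
with open("panel_b.pdf", "wb") as f:
    f.write(resp.content)

The same call could be wired to a custom button in the dashboard's JavaScript instead of the default Export to PDF action.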
No results after executing the query. There is a lookup file called "bd_users_hierarchy.csv" which contains Active Directory users, and "mapr_ticket_contacts.csv" which contains the UseCase information. Please check the query below (screenshot omitted), written to find the top CPU users and use cases on all edge nodes.

In the inputlookup file "mapr_ticket_contacts.csv", UseCases ending with the letters "s, q, g, p" need to be trimmed to get the email addresses - for example, removing the trailing letter "p".

Edge node information: Edge_Nodes_All.csv
Active Directory users: bd_users_hierarchy.csv
UseCases: mapr_ticket_contacts.csv (trailing letters "s, q, g, p" need to be trimmed)

I have tried the Splunk query below, but I am not getting results:

index=imdc_*_os sourcetype=ps1 [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
| fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
    | stats max(eval(id+1)) as cores by host]
| eval pct_CPU = round(total_cpu/cores,2)
| stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user, host, cores
| table host user cores total_cpu "CPU %"
| search NOT user IN ("root","imdcsup","hadpsup")
| sort - "CPU %"
| head 10
| join type=left user
    [| inputlookup bd_users_hierarchy.csv
    | rename email as user_email
    | table user, user_email]
| join type=left user
    [| inputlookup mapr_ticket_contacts.csv
    | eventstats max(Modified_Time) as Modified_Time_max by UseCase
    | where Modified_Time=Modified_Time_max
    | eval Modified_Time=if(Modified_Time=0,"Not Updated",strftime(Modified_Time,"%Y-%m-%d %H:%M"))
    | rename Updated_By as "Last_Updated_By", Modified_Time as "Last_Modified_Time"
    | rex field=UseCase "(?<UseCase>.*)."
    | rename UseCase as user
    | rename Support_Team_DL as user_email
    | table user, user_email]

Appreciate your quick response on the same.
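One hedged observation on the trimming step: the rex (?<UseCase>.*). strips the last character of every UseCase unconditionally. To strip only a trailing s, q, g, or p as described, something like this may be closer to the intent:

| rex field=UseCase "(?<UseCase>.*)[sqgp]$"

When the value does not end in one of those letters, the rex simply fails to match and UseCase is left unchanged.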
And you can also add a <change> element in the multiselect, which, although officially unsupported, does work, i.e. this

<change>
  <eval token="selections">mvcount($form.element$)</eval>
</change>

Note that you don't need the split here, as $form.element$ is only flattened when the token is assigned in the SPL.
As @ITWhisperer says, you should use $form.element$ - the $form.element$ variant of the token holds the values of the selections, whereas the base $element$ holds the final, fully expanded token with all the prefixes, suffixes, and delimiters. See your slightly modified example:

<form version="1.1" theme="light">
  <label>test2</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="element" searchWhenChanged="true">
      <label>Fruit Select</label>
      <choice value="a">Apple</choice>
      <choice value="b">Banana</choice>
      <choice value="c">Coconut</choice>
      <choice value="d">Dragonfruit</choice>
      <choice value="e">Elderberry</choice>
      <choice value="f">Fig</choice>
      <choice value="g">Grape</choice>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>, </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Form element::$form.element$, Element::$element$</title>
      <single>
        <title>Number of selected fruit</title>
        <search>
          <query>| makeresults | eval selected_total=mvcount(split($form.element|s$,",")) | table selected_total</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</form>

I am not sure whether | eval selected_total=mvcount(split($form.element|s$,",")) would also work in Dashboard Studio.
After I "discovered" MAX_EVENTS from solving Why are REST API receivers/simple breaks input unexpectedly? I thought that gave me key to this problem as well, especially confirming that some events I ... See more...
After I "discovered" MAX_EVENTS from solving Why are REST API receivers/simple breaks input unexpectedly? I thought that gave me key to this problem as well, especially confirming that some events I knew got cutoff indeed had > 256 "lines".  Alas, that was not to be. Nevertheless, I finally find the fix and the key is still in props.conf and still explained in Line breaking.   TRUNCATE = <non-negative integer> * The default maximum line length, in bytes. * Although this is in bytes, line length is rounded down when this would otherwise land mid-character for multi-byte characters. * Set to 0 if you never want truncation (very long lines are, however, often a sign of garbage data). * Default: 10000   It turns out that those events were larger than 10,000 bytes!  In short, I previously focused too much on column and forgot to check total event size, blindly trusting that a row in CSV cannot be that long. (That the CSV contains multi-line columns makes the assessment more difficult.) This problem has nothing to do with CSV format as the title of the post implies.  Similar to REST API being a red herring in my other problem, CSV is a red herring here. Like line numbers in that other trouble, limit on total event size is in props.conf, not limit.conf. Even though some of these events do contain > 256 lines, MAX_EVENTS has no effect one way or another when INDEXED_EXTRACTIONS = csv is in place. Change to TRUNCATE can be set per sourcetype from Splunk Web.  No restart needed. For anyone who sees this in the future, the final clue came from this warning in Event Preview while using Splunk Web Upload: Final triage came upon me when I extracted those failing events in a test file and saw that they didn't trigger this warning in Instance 2.  Following a clue given by inventsekar in https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-the-line-truncating-warning/m-p/370655 to examine that "good" instance, TRUNCATE override was shown in localized system props.conf!  (I couldn't find any notes in my previous work that indicated this change.) But to make things more interesting, you may not be able to see that warning if the absolute majority of events do not exceed default TRUNCATE value.  This lack of warning really blindsided my diagnosis.
First question - is the output a single row, or are multiple rows expected? In the latter case, what is the entity that separates the rows - is it REFERENCE_VAL, and if so, how does one correlate REFERENCE_VAL to RELATED_VAL?

This is the ONE row solution:

index=someIndex searchString OR someSearchString
| rex field=_raw "stuff(?<REFERENCE_VAL>somestuff)$"
| rex field=_raw "stuff(?<RELATED_VAL>otherstuff)$"
| stats min(eval(if(isnotnull(REFERENCE_VAL), _time, null()))) as EVENT_TIME min(eval(if(isnotnull(RELATED_VAL), _time, null()))) as RELATED_TIME
| eval timeBand=RELATED_TIME-EVENT_TIME
| where abs(timeBand)<2

which will only give a result if the two times are less than 2 seconds apart (_time is in seconds), but I suspect you are expecting more than one row...