All Posts

First, check whether this pre-built app for Commvault meets your specific needs; if so, follow the installation and configuration steps in the documentation: https://splunkbase.splunk.com/app/5718
Hi @abhaywdc, there are a few ways to do this. Here's one using props.conf/transforms.conf:

props.conf:

...
TRANSFORMS-removeDupe = removeDupe

transforms.conf:

[removeDupe]
REGEX = (?s)(.*?)((but[\r\n]+)+)(.*)
FORMAT = $1$3$4
DEST_KEY = _raw

This transform tells Splunk to replace a run of repeated "but" lines with its last instance, thereby de-duplicating them. (Screenshot omitted: explanation of the regex from regexr.)
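To sanity-check the regex outside Splunk, here is a small Python sketch of the same substitution. FORMAT = $1$3$4 corresponds to the backreference replacement \1\3\4; the sample string is made up for illustration.

```python
import re

# Same pattern as the transform: group 2 is the whole run of repeated
# "but" lines, group 3 is the last repetition, group 4 is the remainder.
pattern = re.compile(r"(?s)(.*?)((but[\r\n]+)+)(.*)")

raw = "line one\nbut\nbut\nbut\nline two"

# FORMAT = $1$3$4 keeps everything except the duplicated repetitions,
# collapsing the run down to its last instance.
deduped = pattern.sub(r"\1\3\4", raw)
print(deduped)
```

Because the pattern's trailing (.*) consumes the rest of the event, a single application collapses the first run of repeats, which mirrors how the transform rewrites _raw.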
The dashboard has two visible panels, A and C. Panel B is hidden. When I use the default Export to PDF, it only shows panels A and C, which works as intended. Panel B itself is a modal dialog box on top of the underlying dashboard that is also hidden by depends="$token$". Ideally, I want to adjust the Export to PDF functionality to export panel B rather than the whole dashboard.
Panel B is part of dashboard X, but you say that the export works for dashboard X and not for panel B? When you say popup, do you mean a modal dialog box on top of the underlying dashboard, or just a panel hidden by depends="$token$"? I expect it will not export a modal popup generated through JS.
Thank you for your support. Hmm, I made sure that all the samples in DatasetA are the same as in DatasetB. Therefore, I do not understand why DatasetA.action has values while DatasetA.DatasetB.action does not. It's not only the field "action" — all the fields under ".DatasetB" have no values, even though DatasetB is inherited from DatasetA. Maybe something is wrong in the data model settings?
I have a dashboard X consisting of multiple panels (A, B, C), each populated with dynamic tokens. Panel A contains tabular data. When a user clicks a cell, the table data is registered as tokens. When a token value changes, this triggers JavaScript which "activates" panel B, which is originally hidden. This creates a popup consisting of panel B, populated with data passed via tokens from panel A.

Splunk has a default Export to PDF functionality. I know it uses pdfgen_endpoint.py, but how does clicking this button trigger the Python script? Currently this functionality works for exporting dashboard X. How do I make adjustments so it can also work for panel B alone?

Calling /splunkd/__raw/services/pdfgen/render returns: "PDF endpoint must be called with one of the following args: 'input-dashboard=<dashboard-id>' or 'input-report=<report-id>' or 'input-dashboard-xml=<dashboard-xml>'" — but if I try to pass the XML, it requires all token values to be resolved. Please assist.
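For reference, a tiny Python sketch of how such a request URL could be assembled, using the 'input-dashboard' argument named in the error message above. The host, port, and dashboard id are made-up placeholders, not values from the original post.

```python
from urllib.parse import urlencode

# Sketch only: build the query string for the pdfgen render endpoint
# mentioned above. Host, port, and dashboard id are hypothetical.
base = "https://localhost:8000/splunkd/__raw/services/pdfgen/render"

params = {"input-dashboard": "my_dashboard"}  # hypothetical dashboard id
url = base + "?" + urlencode(params)
print(url)
```

An authenticated session (as Splunk Web provides) would still be required for the actual call; this only shows the argument shape the endpoint's error message asks for.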
No results after executing the query. There is a lookup file called "bd_users_hierarchy.csv" which contains Active Directory users, and "mapr_ticket_contacts.csv" which contains UseCase information. Please check the screenshot and the query below, written to find the top CPU users and use cases on all edge nodes.

In the lookup file "mapr_ticket_contacts.csv", UseCase values end with the letters "s", "q", "g", or "p", which need to be trimmed off to get the email addresses. For example, removing the trailing letter "p".

Edge node information — Edge_Nodes_All.csv
Active Directory users — bd_users_hierarchy.csv
UseCases — mapr_ticket_contacts.csv (need to trim trailing "s", "q", "g", "p")

I have tried the Splunk query below, but am not getting results:

index=imdc_*_os sourcetype=ps1 [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
|fields cluster, host, user, total_cpu
| join type=inner host
    [search `gold_mpstat` OR `silver_mpstat` OR `platinum_mpstat` OR `palladium_mpstat`
        [|inputlookup Edge_Nodes_All.csv where Environment="*" AND host="*" |fields host]
    |stats max(eval(id+1)) as cores by host]
|eval pct_CPU = round(total_cpu/cores,2)
|stats max(total_cpu) as total_cpu, max(pct_CPU) as "CPU %" by user,host,cores
|table host user cores total_cpu,"CPU %"
| search NOT user IN ("root","imdcsup","hadpsup")
|sort - "CPU %"
|head 10
| join type=left user
    [| inputlookup bd_users_hierarchy.csv
    | rename email as user_email
    | table user,user_email]
| join type=left user
    [| inputlookup mapr_ticket_contacts.csv
    | eventstats max(Modified_Time) as Modified_Time_max by UseCase
    | where Modified_Time=Modified_Time_max
    | eval Modified_Time=if(Modified_Time=0,"Not Updated",strftime(Modified_Time,"%Y-%m-%d %H:%M"))
    | rename Updated_By as "Last_Updated_By",Modified_Time as "Last_Modified_Time"
    | rex field=UseCase "(?<UseCase>.*)."
    | rename UseCase as user
    | rename Support_Team_DL as user_email
    | table user,user_email]

Appreciate your quick response on the same.
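One thing worth noting about the trimming step: the rex "(?<UseCase>.*)." in the query drops the last character of UseCase unconditionally, whatever it is. A small Python sketch (sample values made up) of stripping only a trailing s/q/g/p:

```python
import re

# The query's rex "(?<UseCase>.*)."  drops the final character no matter
# what it is. To strip only a trailing s, q, g, or p, anchor on that class.
def trim_usecase(value: str) -> str:
    return re.sub(r"[sqgp]$", "", value)

print(trim_usecase("analyticsp"))  # trailing "p" removed
print(trim_usecase("report"))      # unchanged: does not end in s/q/g/p
```

The same anchored character class could be used in the SPL rex instead of the bare "." if only those four trailing letters should ever be removed.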
And you can also add a <change> element in the multiselect, which, although officially unsupported, does work, i.e. this:

<change>
  <eval token="selections">mvcount($form.element$)</eval>
</change>

Note that you don't need the split here, as $form.element$ is only flattened in the token assignment in the SPL.
As @ITWhisperer said, you should use $form.element$ — the $form.element$ variant of the token is the one that holds the values of the selections, whereas the base $element$ holds the final fully expanded token with all the prefixes, suffixes, and delimiters. See your slightly modified example:

<form version="1.1" theme="light">
  <label>test2</label>
  <fieldset submitButton="false">
    <input type="multiselect" token="element" searchWhenChanged="true">
      <label>Fruit Select</label>
      <choice value="a">Apple</choice>
      <choice value="b">Banana</choice>
      <choice value="c">Coconut</choice>
      <choice value="d">Dragonfruit</choice>
      <choice value="e">Elderberry</choice>
      <choice value="f">Fig</choice>
      <choice value="g">Grape</choice>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>, </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Form element::$form.element$, Element::$element$</title>
      <single>
        <title>Number of selected fruit</title>
        <search>
          <query>| makeresults | eval selected_total=mvcount(split($form.element|s$,",")) | table selected_total</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</form>

I am not sure whether | eval selected_total=mvcount(split($form.element|s$,",")) would also work in Dashboard Studio.
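The counting idea in that query — split the flattened token on commas and count the pieces — can be sketched in Python. This assumes (as the SPL does) that $form.element$ flattens to a simple comma-separated string of the selected values; the sample value is made up.

```python
# mvcount(split($form.element|s$, ",")) in the dashboard counts the
# comma-separated selections; the same idea in plain Python.
form_element = "a,b,c"  # assumed flattened $form.element$ for three selections

selected_total = len(form_element.split(","))
print(selected_total)
```

This is also why the base $element$ token is the wrong input for the count: its prefixes, suffixes, and delimiter string would change what a comma split returns.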
After I "discovered" MAX_EVENTS while solving "Why are REST API receivers/simple breaks input unexpectedly?", I thought that gave me the key to this problem as well, especially after confirming that some events I knew got cut off indeed had more than 256 "lines". Alas, that was not to be. Nevertheless, I finally found the fix, and the key is still in props.conf and still explained in Line breaking:

TRUNCATE = <non-negative integer>
* The default maximum line length, in bytes.
* Although this is in bytes, line length is rounded down when this would otherwise land mid-character for multi-byte characters.
* Set to 0 if you never want truncation (very long lines are, however, often a sign of garbage data).
* Default: 10000

It turns out that those events were larger than 10,000 bytes! In short, I previously focused too much on columns and forgot to check total event size, blindly trusting that a row in a CSV cannot be that long. (That the CSV contains multi-line columns makes the assessment more difficult.)

This problem has nothing to do with the CSV format, as the title of the post implies. Similar to the REST API being a red herring in my other problem, CSV is a red herring here. Like the line limit in that other trouble, the limit on total event size is in props.conf, not limits.conf. Even though some of these events do contain more than 256 lines, MAX_EVENTS has no effect one way or the other when INDEXED_EXTRACTIONS = csv is in place. A change to TRUNCATE can be made per sourcetype from Splunk Web; no restart needed.

For anyone who sees this in the future: the final clue came from a truncation warning in Event Preview while using Splunk Web Upload (screenshot omitted). The final triage came when I extracted those failing events into a test file and saw that they didn't trigger this warning on Instance 2. Following a clue given by inventsekar in https://community.splunk.com/t5/Getting-Data-In/How-can-we-avoid-the-line-truncating-warning/m-p/370655 to examine that "good" instance, a TRUNCATE override was shown in its localized system props.conf! (I couldn't find any notes in my previous work that indicated this change.) But to make things more interesting, you may not be able to see that warning if the absolute majority of events do not exceed the default TRUNCATE value. This lack of warning really blindsided my diagnosis.
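As a sketch of the kind of per-sourcetype override described above (the sourcetype name and limit value here are made-up examples, not from the original post):

```
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# Raise the per-event byte limit above the 10,000-byte default
TRUNCATE = 100000
```

As noted above, the same override can be made per sourcetype from Splunk Web without a restart.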
First question: is the output a single row, or are multiple rows expected? In the latter case, what is the entity that separates the rows — is it REFERENCE_VAL, and if so, how does one correlate REFERENCE_VAL to RELATED_VAL?

This is the ONE-row solution:

index=someIndex searchString OR someSearchString
| rex field=_raw "stuff(?<REFERENCE_VAL>)$"
| rex field=_raw "stuff(?<RELATED_VAL>)$"
| stats min(eval(if(isnotnull(REFERENCE_VAL), _time, null()))) as EVENT_TIME
        min(eval(if(isnotnull(RELATED_VAL), _time, null()))) as RELATED_TIME
| eval timeBand=RELATED_TIME-EVENT_TIME
| where abs(timeBand)<2000

which will only give a result if the two times are within the band (note that _time is in seconds, so 2000 here is 2000 seconds; use 2 for a two-second window). But I suspect you are expecting more than one row...
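The time-band check at the end of that query can be sketched in Python. The epoch timestamps here are made up; the point is only that Splunk's _time is in seconds, so the threshold is in seconds too.

```python
# Same check as "| eval timeBand=RELATED_TIME-EVENT_TIME | where abs(timeBand)<2000",
# with hypothetical epoch timestamps (Splunk's _time is in seconds).
event_time = 1700000100.0    # time of the REFERENCE_VAL event
related_time = 1700000101.5  # time of the RELATED_VAL event

time_band = related_time - event_time
within_band = abs(time_band) < 2000
print(within_band)
```

If the original intent really was "within 2000 milliseconds", the comparison would need to be abs(time_band) < 2 instead.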
I'm still seeing this behavior in an upgrade to 9.2.1 (via rpm). This system was running an older version of Splunk. The "splunk" user did exist, and logs were showing up in the indexers/web search. The service is running via systemd. Upon starting, it chowned everything to the wrong user (splunkfwd), so it couldn't access its config and exited. lol. Please, Splunk, do not force user names or group names, and don't change them during an update! It is not the (Unix) way. (Don't get me started about the main Splunk process being able to modify its own config and binaries and execute the altered binaries; that just isn't safe.) I reverted to a snapshot; at least Splunk runs and logs again. Unfortunately, this is a compliance failure at modern companies. Now tell me again why this stunt was necessary.
From the documentation, I believe that the Task Server should start after I set up JAVA_HOME, but it has been failing to start, with only the message "Failed to restart task server." I am running an instance of Splunk version 8.1.5. When installing DBX 3.17.2, I installed OpenJDK 8, but DBX stated that it required Java 11, so I installed java-11-openjdk version 11.0.23.0.9. Task Server JVM Options were automatically set to "-Ddw.server.applicationConnectors[0].port=9998". Is there anything else missing? Is there a way to debug this issue? I looked into the internal logs from this host but have not been able to find anything that stands out. Thanks for any insights and thoughts.
Hello Splunk Community! There are clear instructions on how to import services from a CSV file in ITSI. However, I can't find a way to export the same data into a CSV file. How can I export service dependencies from ITSI? Thanks.
Both HEC and the UF support ack. While HEC does support higher volume, both have good throughput. We'd need to know more about how much data you intend to send to determine which is better. The data sent to HEC has to be in a particular format, and ACKs must be checked periodically, so there must be a client maintained by the customer. There is no additional cost (from Splunk) for either approach. Yes, you will want an add-on, especially if you use the UF (it may also be needed for HEC). The add-on ensures the data is onboarded properly and defines the fields to be extracted.
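To illustrate the "particular format" point about HEC: events are posted as JSON objects with fields like "time", "sourcetype", and "event". A minimal Python sketch of building such a payload (the sourcetype and event contents are made up; no network call is made):

```python
import json

# Minimal HEC-style event payload. The actual POST would go to
# https://<host>:8088/services/collector/event with an Authorization
# header carrying the HEC token — omitted here.
payload = {
    "time": 1700000000,           # event time, epoch seconds
    "sourcetype": "my:app:logs",  # hypothetical sourcetype
    "event": {"message": "user login", "status": "ok"},
}
body = json.dumps(payload)
print(body)
```

This client-side formatting (plus periodic ack checking when indexer acknowledgment is enabled) is the maintenance burden mentioned above, which the UF handles for you.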
I believe this was a misunderstanding on my part about how the episode views work. The "Events Timeline" screen looks like what I would expect: one alert, with the timeline showing it was red, then moved to green. The "All Events" view appears to be a running list of all events that drive state changes.
"Find event in one search, get related events by time in another search." I found some related questions but could not formulate a working solution from them... Of course this doesn't work, but maybe it will make clear what is wanted: values from the second search's events within milliseconds (2000 shown) of the first search's event.

index=someIndex searchString
| rex field=_raw "stuff(?<REFERENCE_VAL>)$"
| stats _time as EVENT_TIME
| append (search index=anIndex someSearchString
    | rex field=_raw "stuff(?<RELATED_VAL>)$"
    | eval timeBand=_time-EVENT_TIME
    | where abs(timeBand)<2000
    | stats _time as RELATED_TIME)
| table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL
@bowesmana I'll test this out and report back. If I can pass the captured variables, it should work. Search filters on roles might be a bit too limiting, though admittedly I'm not sure. Most users with access to Splunk already have roles, so unless the search filter would apply only to the indexes in the new role (i.e., users with Role A have access to index A, and users with Role B have access to the filtered-search index B), it might not work for me.
Did this work? Did you discover that you had to implement additional steps to make it work?   Thanks, Farhan
Yes, it is possible and done often.  It requires Professional Services, though.