The default value of the product selection should be 'latest'. The token for the default value is determined by a hidden search for the latest product, which depends on the selected device. If the device selection changes, the product selection should revert to the default value, which is the latest product ID for the newly selected device. Currently, setting the latest product ID upon device change is not functioning. How can I resolve this issue?

<search id="base_search">
  <query>
    | mpreview index="my_index"
    | search key IN $token_device$
  </query>
  <earliest>$token_time.earliest$</earliest>
  <latest>$token_time.latest$</latest>
  <refresh>300</refresh>
</search>

<input id="select_device" type="dropdown" token="token_device" searchWhenChanged="true">
  <label>Device</label>
  <selectFirstChoice>true</selectFirstChoice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <search>
    <query>
      | mpreview index="my_index"
      | stats count by key
      | fields key
      | lookup device-mapping.csv ...
      | fields key full_name
    </query>
  </search>
  <fieldForLabel>full_name</fieldForLabel>
  <fieldForValue>key</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <unset token="token_product"></unset>
    <unset token="form.token_product"></unset>
  </change>
</input>

<search>
  <query>
    | mpreview index="my_index"
    | search key IN $token_device$
    | stats latest(_time) as latest_time by product_id
    | sort -latest_time
    | head 1
    | fields product_id
  </query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <condition match="$job.resultCount$ != 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition match="$job.resultCount$ == 0">
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>

<input id="select_product" type="multiselect" token="token_product" searchWhenChanged="true">
  <label>Product</label>
  <default>$latest_product_id$</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>
      | stats latest(_time) as latest_time by product_id
      | eventstats max(latest_time) as max_time
      | eval label=if(latest_time == max_time, "latest", product_id)
      | sort - latest_time
      | fields label, product_id
    </query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>product_id</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <condition label="All">
      <set token="token_product">("*") AND product_id != "LoremIpsum"</set>
    </condition>
  </change>
</input>
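One approach that often resolves this (a sketch, not tested against your dashboard): a `<default>` is only applied when the input initializes, and a multiselect only picks up a programmatic value when you set `form.token_product`. Setting `form.token_product` directly in the hidden search's `<done>` handler pushes the new value into the input each time the search re-runs after a device change:

```
<done>
  <condition match="$job.resultCount$ != 0">
    <set token="latest_product_id">$result.product_id$</set>
    <!-- also push the value into the multiselect itself -->
    <set token="form.token_product">$result.product_id$</set>
  </condition>
  <condition match="$job.resultCount$ == 0">
    <set token="latest_product_id">*</set>
    <set token="form.token_product">*</set>
  </condition>
</done>
```

Combined with the `<unset>` of `form.token_product` you already have in the device dropdown's `<change>` handler, the input should clear on device change and then repopulate when the hidden search completes.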
Thanks! Works like a charm!
@sainag_splunk Tried it and got this error: [/visualizations/viz_JOPhfhQli/options/y2AxisScale]: must match pattern..... and many more.
It's a generating command so the SPL has to start with a pipe. | btool limits list
I've not done enough with the cloud to know if this works, but on-prem I would have done this:

| rest splunk_server=local /servicesNS/-/-/configs/conf-limits search="eai:acl.app=*"
Hi, thanks for the help. I have installed it, but am I missing something?
Install the Admin's Little Helper app (https://splunkbase.splunk.com/app/6368).  It contains a 'btool' command that you can include in your SPL.
Hi, is it possible to convert the Enterprise command line

bin/splunk btool limits list --app=MX.3_MONITORING_v3 --debug

to a REST command that can be run from SPL in the cloud, please? Thanks in advance.
Thanks, this worked for me.
It's not about "whose is longer". And yes, I've seen many interesting hacks, but the fact remains: Splunk works one event at a time. So you can't "carry over" any info from one event to another using just props and transforms (except for that very, very ugly and unmaintainable trick of actually cloning the event and separately modifying each copy). Also, you cannot split an event (or merge it) after it's been through the line breaking/merging phase. So you can't turn

{"whatever": ["a","b","c"], "something":"something"}

into

{"whatever": "a", "something":"something"}
{"whatever": "b", "something":"something"}
{"whatever": "c", "something":"something"}

using props and transforms alone. The ingestion pipeline doesn't deal with structured data (with the exception of indexed extractions on the UF, but that's a different story).
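To illustrate the point: that kind of fan-out has to happen before the data reaches Splunk's ingestion pipeline, for example in a small preprocessing step in front of your forwarder or HEC client (a hypothetical Python sketch, not Splunk functionality):

```python
import json

def fan_out(event: str, list_field: str) -> list[str]:
    """Split one JSON event whose `list_field` holds a list into
    one event per list value, keeping all other fields."""
    obj = json.loads(event)
    values = obj.pop(list_field)
    return [json.dumps({list_field: v, **obj}) for v in values]

src = '{"whatever": ["a", "b", "c"], "something": "something"}'
for line in fan_out(src, "whatever"):
    print(line)
```

Each resulting line is then a standalone JSON event that Splunk can break and index normally.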
I was looking to figure out how to do this too and searched all through the inputs.conf documentation and couldn't find what I was looking for. Would be cool and useful.
Awesome! Thank you so much!
The alert will run every day until you change the schedule, disable the alert, or delete it. What expires are the search results found by the alert.
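For reference, this maps to the alert.expires setting in savedsearches.conf, which controls how long the triggered-alert record and its search artifacts are kept, not how long the alert stays scheduled (a minimal sketch; the stanza name and schedule are placeholders):

```
[My Daily Alert]
enableSched = 1
cron_schedule = 0 6 * * *
# Triggered-alert records/artifacts are kept for 24 hours;
# the schedule above keeps firing regardless.
alert.expires = 24h
```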
Any chance you can accept something as simple as converting the crt to pem and appending to existing pem file?
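In case it helps, the conversion itself is a one-liner with openssl (a sketch; the first command just generates a throwaway self-signed DER cert as a stand-in for your real .crt, so replace mycert.crt and existing.pem with your actual file names):

```shell
# Stand-in for your existing .crt (throwaway self-signed cert in DER form)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
        -keyout mykey.pem -out mycert.crt -outform DER 2>/dev/null

# Convert the .crt to PEM (drop "-inform DER" if the .crt is already PEM text)
openssl x509 -inform DER -in mycert.crt -out mycert.pem

# Append the converted cert to the existing PEM bundle
cat mycert.pem >> existing.pem
```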
Sure, and thanks for asking. The data file is called "tutorialdata.zip", and was downloaded from the Splunk site here: https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchTutorial/Systemrequirements#Download_the_tutorial_data_files Thanks again. Avery
Hello, I am confused about the "Expires" setting when creating an alert. My alert is scheduled every day and Expires = 24 hours; does that mean that after 24 hours the alert will no longer run? Thank you.
Thanks for the advice. Well, after working with Splunk for 10+ years I frankly don't agree with the "simple string-based manipulation that Splunk can do in the ingestion pipe"; I'd say I've seen amazing (to the extent of crazy) things done with props and transforms. That said, Splunk might not be able to do exactly what I'm after here, but I'm willing to spend time trying anyway, as this will have a major impact on performance at search time. Yes, there is some metadata that needs to stay with each event so they can be found again. I have some ideas in my head on how to twist this, but right now I'm on vacation and can't test them for the next week or so, so I'm just "warming up" and looking for / listening in on others' crazy ideas about what they have achieved in Splunk.
Yes, I used that image but it still didn't work. Thanks for sharing the documentation.
Is there a native way to run scripts in a pwsh.exe-managed environment? It's not mentioned in the docs, so I believe not: https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Inputsconf

We all know there is [powershell://<name>] in inputs.conf to run "classic" PowerShell scripts. Actually, it runs the script in the "classic" PowerShell environment: depending on which Windows version/build the Universal Forwarder is installed on, that will be a PowerShell version up to 5.1 (which is managed by the powershell.exe binary, btw). But now we have a brand-new PowerShell Core, managed by a different binary: pwsh.exe. PowerShell Core has new features not available in "classic" PowerShell, and the two are not 100% compatible. Additionally, PowerShell Core is platform agnostic, so we can install it on Linux and run PowerShell Core based scripts there (don't ask me why anyone would do that, but it's possible).

Currently I'm running PowerShell Core scripts by starting a batch script in the cmd environment, which then starts pwsh.exe with the parameters to run my PowerShell Core based script. Not elegant at all.
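For what it's worth, that cmd wrapper can at least be wired in as a regular scripted input instead of being run out of band (a sketch; the app path, file names, interval, and sourcetype are placeholders, and this is still a workaround, not native pwsh support):

```
# inputs.conf in your app -- run the .cmd wrapper as a scripted input
[script://.\bin\run_core.cmd]
interval = 300
sourcetype = pwsh:output
disabled = 0
```

with bin\run_core.cmd being nothing more than:

```
@echo off
"C:\Program Files\PowerShell\7\pwsh.exe" -NoProfile -File "%~dp0my_script.ps1"
```

Splunk then captures whatever the script writes to stdout, the same as any other scripted input.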