All Posts

@SN1  stats latest(SensorHealthState) by DeviceName is far more efficient than dedup, especially when you're only interested in the most recent state. It reduces the dataset early and avoids unnecessary processing. Try the search below:

index=endpoint_defender source="AdvancedHunting-DeviceInfo" (DeviceType=Workstation OR DeviceType=Server)
| stats latest(SensorHealthState) as SensorHealthState latest(_time) as _time latest(DeviceDynamicTags) as DeviceDynamicTags by DeviceName ``` keep DeviceDynamicTags so the rex below still has a field to parse ```
| search SensorHealthState IN ("active", "Inactive", "Misconfigured", "Impaired communications", "No sensor data")
| rex field=DeviceDynamicTags "\"(?<code>(?!/LINUX)[A-Z]+)\""
| rex field=DeviceName "^(?<Hostname>[^.]+)"
| lookup lkp-GlobalIpRange.csv code OUTPUT "Company Code", Region
| eval Region=mvindex(Region, 0)
| search DeviceName="bie-n1690.emea.duerr.int"
| table Hostname code "Company Code" DeviceName _time Region SensorHealthState

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
@LOP22456  The error highlights that only the admin role currently has write access to this specific saved search, so the saved search permissions need to be set correctly. Go to your saved search -> Edit Permissions -> change Write access to include XXX-power.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
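To double-check who currently has access before and after making that change, a minimal sketch using the saved/searches REST endpoint (the title "My Saved Search" is a placeholder - substitute the name of your saved search):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="My Saved Search"
| table title eai:acl.app eai:acl.owner eai:acl.sharing eai:acl.perms.read eai:acl.perms.write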
Problem resolved. Finding: we suspect the v9.0 update changed from the Table API to the Import Set API, which caused the import_transformer role to stop working in ServiceNow for all instances (TEST/DEV/PROD). Solution: go to ServiceNow and re-add the import_transformer role to the Integration ID for all instances (TEST/DEV/PROD).
@SN1  There is some other odd stuff going on in your search - the same lookup is run three times, and the renaming of code and CC is unnecessary. You can distill this down and optimise it by doing the stats latest early in the piece instead of dedup (a command that should not be used unless it's really needed), then doing the remaining rex/eval/lookup work on the small subset of data. Your device type search should be done up top - the other searches are looking at latest state, so they are in the right place. But even this example doesn't deal with your "Company Code" field, which you've used OUTPUTNEW for but then do not use, so you can probably junk the coalesce there - without knowing your data, it's hard to say...

index=endpoint_defender source="AdvancedHunting-DeviceInfo" DeviceType IN ("Workstation","Server")
``` These are the fields you want ```
| fields DeviceType DeviceName SensorHealthState Timestamp DeviceDynamicTags
``` So get the latest of each field for the device ```
| stats latest(*) as * by DeviceName
``` Extract the fields you want ```
| rex field=DeviceDynamicTags "\"(?<CC>(?!/LINUX)[A-Z]+)\""
| rex field=Timestamp "(?<timeval>\d{4}-\d{2}-\d{2})"
| rex field=DeviceName "^(?<Hostname>[^.]+)"
``` Only a single lookup is needed ```
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as CC OUTPUT "Company Code" as 4LetCode Region
``` Your use of OUTPUTNEW is handled this way ```
| eval "Company Code"=coalesce('Company Code', 4LetCode)
| eval Region=mvindex('Region',0) , "4LetCode"=mvindex('4LetCode',0)
| search DeviceName="bie-n1690.emea.duerr.int"
| search SensorHealthState = "active" OR SensorHealthState = "Inactive" OR SensorHealthState = "Misconfigured" OR SensorHealthState = "Impaired communications" OR SensorHealthState = "No sensor data"
| table Hostname CC 4LetCode DeviceName timeval Region SensorHealthState
I have a question regarding creating browser tests in Synthetic Monitoring. The website I'm testing generates dynamic IDs for DOM elements, which makes it unreliable to use id attributes for actions like clicking buttons or links. I attempted to use full XPath expressions instead, but the site frequently introduces banners (e.g., announcements) that alter the DOM structure and shift element positions, causing the XPath to break. I'm wondering if there's a more resilient approach to locating elements. For example, is it possible to run a JavaScript snippet to search for an element by its visible text or attribute value, and then use that reference in subsequent steps or click on the element via the JavaScript? If so, how can I implement this? Alternatively, are there best practices or recommended locator strategies for handling dynamic content in Synthetic browser tests?
Any thoughts why it is showing in GMT? It is currently 4 hours ahead; I am EST.

"ds_LYYZ83TP": {
    "name": "LastUpdatedAuthByResults",
    "options": {
        "enableSmartSources": true,
        "query": "| makeresults | eval _time=\"$AuthorizationsBySource:job.lastUpdated$\", unixTimeStamp=strptime(_time, \"%Y-%m-%dT%H:%M:%S.%QZ\"), friendlyTime=strftime(unixTimeStamp,\"%Y-%m-%d %H:%M:%S\")"
    },
    "type": "ds.search"
},

My user preferences have my Time Zone as (GMT-400) Eastern Time (US & Canada). My current time is 19:18:01.
Hi JJ,

Unfortunately the AppInspect team decided on a whim to fail the manual vetting for the latest version of this app (it passed every version before; the reviewer was just having a bad day). You have two options:

1. Upload it as a private app. To do this: change the appid in app.conf, rename the folder to match the new appid, gzip it up and upload it as a private app. Note: this will break the connection to Splunkbase, so you won't be able to automatically update the app from within Splunk.
2. Wait for me to upload a new version. Maybe the reviewer will be in a better mood.
You are correct. This is a Dashboard Studio dashboard using the Absolute layout (not Grid). I will try out your suggestion tonight.
If you run a fairly modern UF by means of a systemd unit, it should get the CAP_DAC_READ_SEARCH capability, which allows it to read files it normally wouldn't have access to (without it you would need to do heavy file permissions magic to ingest logs). If you simply su/sudo to the splunkfwd user, you don't have those capabilities.
I tried to use what you provided with my data. I think it can work, but I am using a summary index and not the _internal index. Inside of that summary index, the actual indexes are named "Indexes". I posted below my attempt to gel your search and my stuff together. Maybe you can help me now that you know this info.

| tstats count where index=dg_app_summary NOT (Indexes="All*" OR Indexes="Undefined" OR Indexes="_*") earliest=-30d@d latest=now by _time, Indexes span=1d
| stats sum(count) as svc_usage by Indexes _time
``` 1. Build a baseline for every index - Replace these lines with your original SVC search```
| where Indexes="proxy" OR Indexes="aws"
```2. 30‑day avg per index```
| eventstats avg(svc_usage) as avg_svc by Indexes
```3. Keep only the last day (the day you are currently monitoring)```
| where _time >= relative_time(now(), "-1d")
```4. Thresholds – 25% above or below the 30‑day average```
| eval si_high = avg_svc * 1.25
| eval si_low = avg_svc * 0.75
```5. Find any day that is outside the band```
| where svc_usage > si_high OR svc_usage < si_low
```6. Show the top 10 indexes by daily usage (optional)```
| sort 0 -svc_usage
| head 10
| table _time Indexes svc_usage avg_svc si_high si_low
Hi @bwheelerice1

You could do something like this to look across the average SVC usage for the last 30 days per index (you need to update the first few lines with your SVC search), determine the average SVC, and then filter if 25% above/below the average:

| tstats count where index=_internal earliest=-30d@d latest=now by _time, host span=1d
| rename host AS index
| stats sum(count) as svc by index _time
``` 1. Build a baseline for every index - Replace these lines with your original SVC search```
```2. 30‑day avg per index```
| eventstats avg(svc) as avg_svc by index
```3. Keep only the last day (the day you are currently monitoring)```
| where _time >= relative_time(now(), "-1d")
```4. Thresholds – 25% above or below the 30‑day average```
| eval si_high = avg_svc * 1.25
| eval si_low = avg_svc * 0.75
```5. Find any day that is outside the band```
| where svc > si_high OR svc < si_low
```6. Show the top 10 indexes by daily usage (optional)```
| sort 0 -svc
| head 10
| table _time index svc avg_svc si_high si_low

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Wooly

If it's for a Dashboard Studio dashboard, then you can use an additional search to convert the time format back to a unix timestamp and then to whatever friendly time format you like. Here is the JSON for the dashboard for you to play with:

{
  "title": "SetLastUpdatedTime",
  "description": "",
  "inputs": {},
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    },
    "visualizations": {
      "global": {
        "showProgressBar": true
      }
    }
  },
  "visualizations": {
    "viz_MRGXRquA": {
      "options": {
        "fontColor": "#0000ff",
        "markdown": "Last Updated: **$PieSearchLastUpdated:result.friendlyTime$**"
      },
      "type": "splunk.markdown"
    },
    "viz_z5OzyBTT": {
      "dataSources": {
        "primary": "ds_ZBRBhP7a"
      },
      "options": {},
      "type": "splunk.pie"
    }
  },
  "dataSources": {
    "ds_LYYZ83TP": {
      "name": "PieSearchLastUpdated",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults \n| eval _time=\"$PieSearch:job.lastUpdated$\", unixTimeStamp=strptime(_time, \"%Y-%m-%dT%H:%M:%S.%QZ\"), friendlyTime=strftime(unixTimeStamp,\"%d/%m/%Y %H:%M:%S\")"
      },
      "type": "ds.search"
    },
    "ds_ZBRBhP7a": {
      "name": "PieSearch",
      "options": {
        "enableSmartSources": true,
        "query": "| tstats count where index=_internal earliest=-12h latest=now by host"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_z5OzyBTT",
            "position": { "h": 300, "w": 400, "x": 10, "y": 0 },
            "type": "block"
          },
          {
            "item": "viz_MRGXRquA",
            "position": { "h": 30, "w": 250, "x": 160, "y": 270 },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
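To test the time conversion from the dashboard JSON above on its own in the search bar, a minimal standalone sketch (the hard-coded timestamp is a hypothetical stand-in for the value the $PieSearch:job.lastUpdated$ token would supply):

| makeresults
``` sample value in the same ISO-8601 format that job.lastUpdated returns ```
| eval lastUpdated="2024-05-01T14:23:45.123Z"
| eval unixTimeStamp=strptime(lastUpdated, "%Y-%m-%dT%H:%M:%S.%QZ")
| eval friendlyTime=strftime(unixTimeStamp, "%d/%m/%Y %H:%M:%S")
| table lastUpdated unixTimeStamp friendlyTime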
Oddly enough, that seems to raise as many questions as it answers. When I run the script manually from CMD while sudo'ed as splunkfwd, I get an error message indicating that splunkfwd doesn't have permission to access /var/log/audit/audit.log or /etc/audit/audit.conf. It isn't clear to me why, when splunkfwd is being used by Splunk UF to execute scripted inputs, it seems to have the necessary permissions to perform "ausearch" and run to completion without errors (at least in the first round when no checkpoint exists yet), but when I try to execute the same script as the same user manually from CMD, suddenly I don't have the necessary permissions. Here are the environment variables that were in place during the execution of the script:

_=/bin/printenv
HISTSIZE=1000
HOME=/opt/splunkforwarder
HOSTNAME=<deployment_client_name>
LANG=en_US.UTF-8
LOGNAME=splunkfwd
LS_COLORS=rs=0:di=38;5;33:ln=38;5;51: ... etc
MAIL=/var/spool/mail/miketbrand0
PATH=/sbin:/bin:/usr/sbin:/usr/bin
PWD=/home/miketbrand0
SHELL=/bin/bash
SHLVL=1
SUDO_COMMAND=/bin/bash /opt/splunkforwarder/etc/apps/<app_name>/bin/audit_log_retreiver.sh
SUDO_GID=0
SUDO_UID=0
SUDO_USER=root
TERM=xterm-256color
USER=splunkfwd

When I got the permission denied error, ausearch exited with an exit code of 1, indicating that there were no matches found in the search results (which is a bit disingenuous because it never actually got to look for matches). But after I ran the script as root once and then re-owned the checkpoint file to belong to splunkfwd, I tried running the script as splunkfwd again. This time ausearch yielded an exit code of 10, which is consistent with what I have observed when Splunk UF executes the script. I think that means that whatever problem is causing ausearch to interpret the checkpoint as corrupted lies with the splunkfwd user, and not with Splunk UF.
Just following on from my last message - are you sure this is a classic dashboard and not a Dashboard Studio dashboard? Classic XML dashboards don't have the ability to overlay markdown quite like you have in your screenshot. I'll look at putting together a solution based on Dashboard Studio in the meantime.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Wooly

You could set a token with eval, such as:

<search id="base">
  <query>index=something etc..etc...</query>
  <done>
    <eval token="lastUpdated">strftime(now(),"%d/%m/%Y, %I:%M %p")</eval>
  </done>
</search>

Then you could reference it with $lastUpdated$.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
It's a daily alert. Some days, like Saturday or Sunday, might not be as high as Monday and Tuesday. But let's say last Monday the highest SVC was 140 and this Monday it was 200 - I want to know when that happens. It can be a percentage or statistical. I tried to use the MLTK command but I kept getting an error.
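Side note: for the same-weekday, week-over-week comparison described here (this Monday vs last Monday), one possible sketch - separate from the averaging approach in the reply below, and assuming the dg_app_summary summary index from the earlier attempt - is timewrap:

| tstats count where index=dg_app_summary NOT (Indexes="All*" OR Indexes="Undefined" OR Indexes="_*") earliest=-4w@w latest=now by _time span=1d
| timechart span=1d sum(count) as daily_svc
``` each row becomes a day of the week, with one daily_svc column per week for side-by-side comparison ```
| timewrap 1week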
If this is supposed to be a statically thresholded alert, you can just add

| where (your_condition_matching_excessive_usage)

and alert if you get any results. If you would like to have some form of dynamic thresholding based on previous values... that might need some more funky logic and possibly include MLTK.
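As a concrete sketch of that static approach, building on the summary-index search earlier in the thread (the limit of 150 is an arbitrary placeholder - substitute your own search and threshold):

| tstats count where index=dg_app_summary NOT (Indexes="All*" OR Indexes="Undefined" OR Indexes="_*") earliest=-1d@d latest=now by _time, Indexes span=1d
| stats sum(count) as svc_usage by Indexes _time
``` keep only indexes whose daily usage exceeds the static limit; the alert fires if any rows come back ```
| where svc_usage > 150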