All Posts

Hi @Wooly

If it's for a Dashboard Studio dashboard, then you can use an additional search to convert the time format back to a Unix timestamp and then to whatever friendly time format you like. Here is the JSON for the dashboard for you to play with:

{
  "title": "SetLastUpdatedTime",
  "description": "",
  "inputs": {},
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    },
    "visualizations": {
      "global": {
        "showProgressBar": true
      }
    }
  },
  "visualizations": {
    "viz_MRGXRquA": {
      "options": {
        "fontColor": "#0000ff",
        "markdown": "Last Updated: **$PieSearchLastUpdated:result.friendlyTime$**"
      },
      "type": "splunk.markdown"
    },
    "viz_z5OzyBTT": {
      "dataSources": {
        "primary": "ds_ZBRBhP7a"
      },
      "options": {},
      "type": "splunk.pie"
    }
  },
  "dataSources": {
    "ds_LYYZ83TP": {
      "name": "PieSearchLastUpdated",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults \n| eval _time=\"$PieSearch:job.lastUpdated$\", unixTimeStamp=strptime(_time, \"%Y-%m-%dT%H:%M:%S.%QZ\"), friendlyTime=strftime(unixTimeStamp,\"%d/%m/%Y %H:%M:%S\")"
      },
      "type": "ds.search"
    },
    "ds_ZBRBhP7a": {
      "name": "PieSearch",
      "options": {
        "enableSmartSources": true,
        "query": "| tstats count where index=_internal earliest=-12h latest=now by host"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_z5OzyBTT",
            "position": { "h": 300, "w": 400, "x": 10, "y": 0 },
            "type": "block"
          },
          {
            "item": "viz_MRGXRquA",
            "position": { "h": 30, "w": 250, "x": 160, "y": 270 },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [
        { "label": "New tab", "layoutId": "layout_1" }
      ]
    }
  }
}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
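For readability, the conversion query embedded in the ds_LYYZ83TP data source of the JSON above is:

```
| makeresults
| eval _time="$PieSearch:job.lastUpdated$",
       unixTimeStamp=strptime(_time, "%Y-%m-%dT%H:%M:%S.%QZ"),
       friendlyTime=strftime(unixTimeStamp, "%d/%m/%Y %H:%M:%S")
```

The markdown visualization then displays the result through the $PieSearchLastUpdated:result.friendlyTime$ token.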
Oddly enough, that seems to raise as many questions as it answers. When I run the script manually from the command line while sudo'ed as splunkfwd, I get an error message indicating that splunkfwd doesn't have permission to access /var/log/audit/audit.log or /etc/audit/audit.conf.

It isn't clear to me why, when splunkfwd is used by the Splunk UF to execute scripted inputs, it seems to have the necessary permissions to perform "ausearch" and run to completion without errors (at least in the first round, when no checkpoint exists yet), but when I try to execute the same script as the same user manually from the command line, I suddenly don't have the necessary permissions. Here are the environment variables that were in place during the execution of the script:

_=/bin/printenv
HISTSIZE=1000
HOME=/opt/splunkforwarder
HOSTNAME=<deployment_client_name>
LANG=en_US.UTF-8
LOGNAME=splunkfwd
LS_COLORS=rs=0:di=38;5;33:ln=38;5;51: ... etc
MAIL=/var/spool/mail/miketbrand0
PATH=/sbin:/bin:/usr/sbin:/usr/bin
PWD=/home/miketbrand0
SHELL=/bin/bash
SHLVL=1
SUDO_COMMAND=/bin/bash /opt/splunkforwarder/etc/apps/<app_name>/bin/audit_log_retreiver.sh
SUDO_GID=0
SUDO_UID=0
SUDO_USER=root
TERM=xterm-256color
USER=splunkfwd

When I got the permission denied error, ausearch exited with an exit code of 1, indicating that there were no matches found in the search results (which is a bit disingenuous, because it never actually got to look for matches). But after I ran the script as root once and then re-owned the checkpoint file to belong to splunkfwd, I tried running the script as splunkfwd again. This time ausearch yielded an exit code of 10, which is consistent with what I have observed when the Splunk UF executes the script. I think that means that whatever problem is causing ausearch to interpret the checkpoint as corrupted lies with the splunkfwd user, and not with the Splunk UF.
Just following on from my last message: are you sure this is a classic dashboard and not a Dashboard Studio dashboard? Classic XML dashboards don't have the ability to overlay markdown quite like you have in your screenshot. I'll look at putting together a solution based on Dashboard Studio in the meantime.
Hi @Wooly

You could set a token with eval, such as:

<search id="base">
  <query>index=something etc..etc...</query>
  <done>
    <eval token="lastUpdated">strftime(now(),"%d/%m/%Y, %I:%M %p")</eval>
  </done>
</search>

Then you could reference it with $lastUpdated$
It's a daily alert. Some days, like Saturday or Sunday, usage might be lower than on Monday and Tuesday. But let's say last Monday the highest SVC was 140, and this Monday it was 200: I want to know that happened. It can be percentage-based or statistical. I tried to use the MLTK command but I kept getting an error.
If this is supposed to be a statically thresholded alert, you can just add

| where (your_condition_matching_excessive_usage)

and alert if you get any results. If you would like to have some form of dynamic thresholding based on previous values... that might need some more funky logic and possibly involve MLTK.
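As a rough sketch of the dynamic variant (field names are taken from the timechart search in the question; the 30-day baseline window and the ±25% band are assumptions to adjust):

```
index=svc_summary earliest=-30d@d latest=now
| bin _time span=1d
| stats sum(svc_usage) AS daily_svc BY _time Indexes
| eventstats avg(daily_svc) AS baseline BY Indexes
| where _time >= relative_time(now(), "-1d@d")
| where daily_svc > baseline * 1.25 OR daily_svc < baseline * 0.75
```

Note this simple average includes the day being tested in its own baseline; a trailing-window comparison or MLTK's anomaly functions would be cleaner, but this shows the shape of the logic.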
Currently using Dashboard Classic, and I added Markdown Text to the bottom of my pie chart to inform the user when the data was last updated. Is there a way in the Markdown Text to format job.lastUpdated? It is currently showing in Zulu time. I was also thinking of putting it in the description field, if possible.
We currently have a search that shows a timeline graph of daily SVC usage by index. Ten of these indexes are our highest for SVC usage. I would like to create an alert if the SVC usage for any of those indexes goes 25% higher or lower than the normal amount. Example: index=test normally uses 100 to 140 SVC per day; the alert should tell us when that index goes 25% over 140 or under 100. We want the search to do this for at least our top 10 SVC usage indexes. Our current timechart search is as follows:

index=svc_summary | timechart limit=10 span=1d useother=f sum(svc_usage) by Indexes
While you are not directly using env vars, they might influence the behaviour of spawned processes. In your case, the possibly important difference is the LD_LIBRARY_PATH variable. Set this variable in your interactive shell session to the forwarder's value and try running ausearch.
There are some important differences, e.g. USER is different: when running from the UF it's splunkfwd, and in the other cases it's root. Have you tried running it from the command line as user splunkfwd, not as root or via sudo? Also, at least the library paths are searched in a different order. It's easier to spot the differences when you sort those env variables before comparing. It's also easy to cat them into the same sort | uniq -c to check whether something is missing or different between the separate runs.
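A small sketch of that comparison, using comm instead of uniq -c. The two printf lines are hypothetical stand-ins for sorted printenv dumps; in practice you would capture the real ones by adding printenv | sort > file to each execution context:

```shell
# Stand-ins for sorted "printenv" output from each context (hypothetical values)
printf 'HOME=/root\nPATH=/sbin:/bin\nUSER=root\n' > /tmp/env_manual.txt
printf 'HOME=/opt/splunkforwarder\nLD_LIBRARY_PATH=/opt/splunkforwarder/lib\nPATH=/opt/splunkforwarder/bin:/usr/bin\nUSER=splunkfwd\n' > /tmp/env_uf.txt

# comm requires sorted input; -3 suppresses lines common to both files,
# leaving only the variables that are missing or different between the runs
comm -3 /tmp/env_manual.txt /tmp/env_uf.txt
```

Anything printed in the second (indented) column exists only in the forwarder's environment, such as LD_LIBRARY_PATH here.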
I'm not sure how environment variables would factor in, considering that none of them are used in my script and all file paths are fully elaborated, but here it goes.

Environment variables when running the script manually from the command line:

LS_COLORS=rs=0:di=38;5;33:ln=38;5;51: ... (I hope I don't have to elaborate all of this)
LANG=en_US.UTF-8
SUDO_GID=1000
HOSTNAME=<deployment_client_name>
SUDO_COMMAND=/bin/bash /opt/splunkforwarder/etc/apps/<app_name>/bin/audit_log_retreiver.sh
USER=root
PWD=/home/miketbrand0
HOME=/root
SUDO_USER=miketbrand0
SUDO_UID=1000
MAIL=/var/spool/mail/miketbrand0
SHELL=/bin/bash
TERM=xterm-256color
SHLVL=1
LOGNAME=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin
HISTSIZE=1000
_=/bin/printenv

Environment variables when running the script using the Splunk Universal Forwarder:

LD_LIBRARY_PATH=/opt/splunkforwarder/lib
LANG=en_US.UTF-8
TZ=:/etc/localtime
OPENSSL_CONF=/opt/splunkforwarder/openssl/openssl.cnf
HOSTNAME=<deployment_client_name>
INVOCATION_ID=bdfc92da21b4sdb0a759a5997d9a85
USER=splunkfwd
SPLUNK_HOME=/opt/splunkforwarder
PYTHONHTTPSVERIFY=0
PWD=/
HOME=/opt/splunkforwarder
PYTHONUTF8=1
JOURNAL_STREAM=9:4867979
SSL_CERT_FILE=/opt/splunkforwarder/openssl/cert.pem
SPLUNK_OS_USER=splunkfwd
SPLUNK_ETC=/opt/splunkforwarder/etc
LDAPCONF=/opt/splunkforwarder/etc/openldap/ldap.conf
SHELL=/bin/bash
SPLUNK_SERVER_NAME=SplunkForwarder
OPENSSL_FIPS=1
SPLUNK_DB=/opt/splunkforwarder/var/lib/splunk
ENABLE_CPUSHARES=true
SHLVL=2
LOGNAME=splunkfwd
PATH=/opt/splunkforwarder/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
_=/usr/bin/printenv

Environment variables when running as a cron job:

LANG=en_US.UTF-8
XDG_SESSION_ID=4157
USER=root
PWD=/root
HOME=/root
SHELL=/bin/sh
SHLVL=2
LOGNAME=root
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
XDG_RUNTIME_DIR=/run/user/0
PATH=/usr/bin:/bin
_=/usr/bin/printenv

Nothing stands out to me as something that manual command-line execution and the cron job have in common that the Splunk UF does differently, that would impact the functionality of ausearch in the script. Do you see anything that I might be missing?
Hi @LOP22456

Further to my other comment, can you also confirm that there isn't another search already in the app with the same name? Can you also confirm that xxx-power has write access to the app as well as xxx-admin? I believe the issue lies in the fact that only xxx-admin has write access to the search.
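If that is the case, one place the permission lives is the app's metadata. A minimal sketch only, with the object name Test and app XXX_Splunk_app taken from the error message in the question (adjust to your real names, and prefer editing permissions through the UI where possible):

```ini
# $SPLUNK_HOME/etc/apps/XXX_Splunk_app/metadata/local.meta (sketch)
[savedsearches/Test]
access = read : [ XXX-power, XXX-user ], write : [ XXX-admin, XXX-power ]
owner = nobody
export = app
```

Adding XXX-power to the write list is what would let those users modify and re-share the object.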
Hi @LOP22456

It looks like the write permission on the saved search that they are trying to share doesn't include the xxx-power role. If the user sets the write permission to xxx-power instead of xxx-admin, does it allow them to share? Had the search been changed to be owned by a different user (e.g. nobody) prior to the user attempting to share it within the app?
Hi @jjsplunk

Unfortunately that app isn't Splunk Cloud compatible, which is why it's not showing in the App browser in Splunk Cloud. You are also not able (as you have found) to upload an app with the same ID as an app on Splunkbase. Under versions, if an app is compatible then Splunk Cloud will be listed in addition to Splunk Enterprise.

I wouldn't necessarily recommend it due to support headaches, but I have known people to modify the ID of an app downloaded from Splunkbase (in its app.conf) and then upload it as a private app. I wouldn't attempt this if you are unfamiliar with building private apps for Splunk Cloud. As I said, this also introduces support headaches, and the app will not be automatically updated. There's also nothing to guarantee the app will pass AppInspect as a private app either.
We have a search app that a group of users are working from. All of the users have the power role, and we have given the power role write permissions into the app. When they try to share some saved searches, they are getting this error:

User 'XXXX' with roles { XXX-power, XXX-user } cannot write: /nobody/XXX_Splunk_app/savedsearches/ Test { read : [ XXX-power, XXX-user ], write : [ XXX-admin ] }, export: app, owner: nobody, removable: no, modtime: 1754574650.678659000

From what I read online, once a user is given write permissions into the app, they can share their KOs. Am I doing something wrong here, or has this since changed?
Hi @danielbb

Try removing the "P" from each extraction - Splunk uses PCRE (Perl Compatible Regular Expressions), not RE2, so it does not need the P in the named extraction. I also noticed that you mentioned a "calculated" field extraction - that expects something that can be eval'd, not a regex. What you need is a "Field Extraction" if editing in the UI, and then add the regex in the "Extraction/Transform" field.
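For reference, an inline extraction along those lines in props.conf might look like the sketch below. The sourcetype name is a placeholder, and the regex is the one from the question with the "P" dropped from each named group; note the posted regex also shows doubled backslashes (\\n, \\s), which would normally be single in the config:

```ini
# props.conf (sketch - replace your_sourcetype with the real sourcetype)
[your_sourcetype]
EXTRACT-audit_fields = ^(?:[^ \n]* ){7}(?<src_host>[^ ]+)[^:\n]*:\s+(?<event_id>[a-f0-9]+:\d+)(?:[^/\n]*/){2}(?<dest_zone>[^\.]+)
```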
I have this regex:

^(?:[^ \\n]* ){7}(?P<src_host>[^ ]+)[^:\\n]*:\\s+(?P<event_id>[a-f0-9]+:\\d+)(?:[^/\\n]*/){2}(?P<dest_zone>[^\\.]+)

I put it in the field extraction with the right sourcetype as an inline field extraction, and it still won't show the extracted fields when searched. _internal shows that its status is "applied". Any idea why?
Hello, I'm trying to install https://splunkbase.splunk.com/app/5022 in a Splunk Cloud instance. If I download the app file and try to install it manually, I get this error:

"This app is available for installation directly from Splunkbase. To install this app, use App Browser page in Splunk Web"

In Splunk, under Find more apps, I searched for HTTP, HTTP Alert, Brendan's name... and the app is not showing up. Could anyone advise whether I'm doing something wrong, or how I can get this app installed? Thanks in advance.
Sending directly from UFs to indexers has been the recommended approach for a long time, but even Splunk acknowledges the practice of having an intermediate layer of HFs (which has its pros and its cons) - https://docs.splunk.com/Documentation/SVA/current/Architectures/Intermediaterouting