All Topics


Hello, I am having issues with my Splunk universal forwarders. Problem: the Splunk Universal Forwarders are not upgrading from version 7.2.6 to version 8 using the custom app I developed. The custom app is a replica of the one that worked for 7.2.6; I created another app with the exact same features. However, once it shuts Splunk down, it does not restart or upgrade the server. Here is the custom app's upgrade script:

#!/bin/bash
# set splunk path
SPLUNK_HOME=/opt/splunkforwarder
# set desired version
NVER=8.2.2
# determine current version
CVER=`cat $SPLUNK_HOME/etc/splunk.version | grep VERSION | cut -d= -f2`
if [ "$NVER" != "$CVER" ]
then
    echo "Upgrading Splunk to $NVER."
    $SPLUNK_HOME/bin/splunk stop
    tar -xvf $SPLUNK_HOME/etc/apps/splunk_upgrade_lin_v8/static/splunkforwarder-8.2.2-87344edfcdb4-Linux-x86_64.tgz -C /opt
    $SPLUNK_HOME/bin/splunk start --accept-license --answer-yes
fi

The static folder holds splunkforwarder-8.2.2-87344edfcdb4-Linux-x86_64.tgz. In the bin directory, the script above is upgrade.sh, and the wrapper.sh I created points to this upgrade.sh. In the local directory, this is what I have listed:

[script://./bin/wrapper.sh]
disabled = false
interval = 3600
sourcetype = upgrade_linuxv8

Once again, this custom app works completely fine with 7.2.6. With any version after that, Splunk just stops once the app is assigned to the client; the forwarder shuts down and doesn't come back until I force-remove the app (rm -rf) and restart Splunk. Does anyone have a workaround for this?

I'm working on a Linux machine hardened according to Center for Internet Security (CIS) hardening benchmarks. This means it's critical to determine, when creating a "splunk" user for the Splunk universal forwarder, whether the splunk user should be classified as a system user (useradd -r -m) or an interactive user (useradd -m). Normally a user added just to facilitate running software should be a system user; that would be least privilege and would be my best guess at how the splunk user should be configured. Under the CIS hardening scheme, system users are prohibited from having passwords (the password is locked) and are also prohibited from launching an interactive shell (the shell is set to /sbin/nologin). This is done so that an attacker cannot assume the splunk user (via ssh or otherwise) and gain interactive shell privileges. I've noted that the Splunk documentation specifies "useradd -m", without the -r, indicating that the splunk user requires interactive user privileges (password/shell access). Just checking whether this is indeed the case, or if I can safely remove this privilege and make the splunk user a system user (no login or shell permitted).

Is there a bug in the df script that produces the wrong byte size for filesystems greater than 1 TB? I'm running a search similar to this:

index=linux sourcetype=df | dedup host | multikv | search host="xxxxx" AND data4 | chart eval(sum(Size)) as x

The number that gets returned is 2.2. data4 is a 2.2 TB filesystem, and I was expecting to see the actual byte count of approximately 200000000. This bug throws off my numbers when I try to calculate the total storage on my systems.
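
If the df input is reporting sizes in human-readable form (e.g. "2.2T"), the unit suffix would explain the 2.2: summing the field numerically simply drops the "T". A minimal sketch that normalizes suffixed values before summing, assuming the field is called Size and carries K/M/G/T suffixes:

index=linux sourcetype=df host="xxxxx"
| multikv
| search data4
| eval multiplier=case(like(Size,"%T"), pow(1024,4), like(Size,"%G"), pow(1024,3), like(Size,"%M"), pow(1024,2), like(Size,"%K"), 1024, true(), 1)
| eval size_bytes=tonumber(rtrim(Size,"KMGT"))*multiplier
| stats sum(size_bytes) as x by host
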
I have data where I am calculating the difference between two timestamps and showing the difference as days:hh:mm:ss. But in some cases, if the duration is greater than 99 days, it's not showing 100; it shows something like 99+04:47:11. I am looking for something like: if the duration is 103 days, then it should show 103+04:47:11. Is this possible in Splunk? Thanks in advance.
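
Building the days+HH:MM:SS string by hand avoids any formatting cap on the day count. A minimal sketch, assuming the difference in seconds is already in a field called diff:

| eval days=floor(diff/86400)
| eval rem=diff-(days*86400)
| eval duration=days."+".printf("%02d:%02d:%02d", floor(rem/3600), floor((rem%3600)/60), rem%60)

For example, diff=8916431 yields duration=103+04:47:11, with no 99-day limit.
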
Are Venn diagrams possible in Splunk? I did not see one as an option, but I didn't know if it was available as an add-on plugin or something of the sort.

https://splunkbase.splunk.com/app/1724

I recently upgraded to the new 4.0.0 version and am seeing a bug with the CSVs now. It seems to be adding "e" and "f" columns, and I cannot delete them, which is causing an unwanted column in the actual | inputlookup asdf.csv. Is this a known bug? Is anyone else seeing this?
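
As an interim workaround, the stray columns could be dropped at search time, assuming they really are named e and f in the CSV header:

| inputlookup asdf.csv
| fields - e f
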
Hello, I am trying to enable all file and directory inputs for the Linux add-on, but every time I attempt to save the new configuration I get an error that Splunk encountered an unexpected problem, can't complete the task, and that I should reload the page. No matter how many times I reload, or even hard-stop and restart the server, the result is the same. Any help is appreciated.

Hi, Splunkers, how do I find out who is using my shared dashboard? Thanks, Kevin
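
Dashboard views show up in Splunk's own web-access logs, so a sketch like the following can surface who has been loading a given dashboard. This assumes access to the _internal index; <app> and <dashboard_id> are placeholders for the real app and dashboard names:

index=_internal sourcetype=splunk_web_access uri_path="*/app/<app>/<dashboard_id>"
| stats count latest(_time) as last_viewed by user
| convert ctime(last_viewed)
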
Hi Splunk Community, I need to calculate results based on a time range picked by the user, where the user also needs those events' timestamps converted to the user's time zone. I have a SUBMITDATE field which needs to fall within the range of the time picker, but the search query only uses Splunk's _time as the filtering field. How should I filter based on the SUBMITDATE field and not _time? Any help is appreciated.
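
One approach is to let the search run over a wide window and then re-filter on SUBMITDATE using the picker's boundaries, which addinfo exposes as info_min_time and info_max_time. A minimal sketch, where index=foo stands in for the real search and SUBMITDATE is assumed to look like 2023-01-15 10:30:00:

index=foo
| eval submit_epoch=strptime(SUBMITDATE, "%Y-%m-%d %H:%M:%S")
| addinfo
| where submit_epoch>=info_min_time AND submit_epoch<=info_max_time

strptime interprets a zone-less timestamp in the time zone of the user running the search, so the comparison lines up with what the user sees. Note that an all-time picker returns info_max_time as +Infinity, which would need a guard.
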
(Running v9.0.2208 of Splunk Cloud.) When I load a dashboard with external URLs in it, they throw up an external content warning. How do I get rid of these? In the version we're running I cannot update 'Settings > Server settings > Dashboards Trusted Domains List', as I believe that is only available in v9.0.2209. I'm also unable to enable automatic UI updates, which is the fix in the current version. I've tried to create a web-features.conf but am not having any luck. This is my web-features.conf (I've also updated app.conf with a [triggers] stanza to manage web-features restarts):

[settings]
# Allowing hyperlinks to load from trusted domains in Dashboard Studio

[features:dashboards_csp]
dashboards_trusted_domain.everything=*teams.microsoft.com

Thanks!

Hi, I have a dashboard with a dropdown with three values: A, B and C. If I choose value A it should load panel A, if I choose value B it should load panel B, and the same for C. I have developed that code, but even though I added a submit button to the dashboard and disabled searchWhenChanged, the panels still load automatically as soon as a value is chosen from the dropdown, before I click the submit button. I need help making the panels load only after clicking the submit button. Please find the code I have developed below.

<form>
  <label>My Dashboard</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="dropdown_token" searchWhenChanged="false">
      <label>dropdown_token</label>
      <default>A</default>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
      <choice value="C">C</choice>
      <change>
        <condition match="'value'==&quot;A&quot;">
          <set token="panelA">true</set>
          <unset token="panelB">false</unset>
          <unset token="panelC">false</unset>
        </condition>
        <condition match="'value'==&quot;B&quot;">
          <unset token="panelA">true</unset>
          <set token="panelB">false</set>
          <unset token="panelC">false</unset>
        </condition>
        <condition match="'value'==&quot;C&quot;">
          <unset token="panelA">true</unset>
          <unset token="panelB">false</unset>
          <set token="panelC">false</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$panelA$">
      <table>
        <title>Panel A</title>
        <search>
          <query>index=a | table a b c</query>
        </search>
      </table>
    </panel>
    <panel depends="$panelB$">
      <table>
        <title>Panel B</title>
        <search>
          <query>index=a | table a b c</query>
        </search>
      </table>
    </panel>
    <panel depends="$panelC$">
      <table>
        <title>Panel C</title>
        <search>
          <query>index=a | table a b c</query>
        </search>
      </table>
    </panel>
  </row>
</form>

We currently have a multi-tier Splunk Enterprise deployment with search-head clustering and indexer clustering. All of our data comes in from Universal Forwarders on remote VMs (thousands of them) belonging to different customers. The inputs.conf on all of the forwarders is set to send to only a couple of indexes in our indexer cluster. We are planning a project to split these indexes on a per-customer basis; for example, index "main" would become "main-cust1", "main-cust2", etc. The point of this is to allow RBAC on a per-customer basis (by limiting access to customer-specific indexes). Are there any additional storage or performance considerations that should be evaluated before pursuing this change?

Hi, I am working with the Splunk Add-on for Microsoft Azure and I'm trying to get Secure Score working with it. Has anyone had any luck getting it working? At the moment it looks like I need to do it with the input being a resource graph, but it doesn't seem to be pulling that data through. It has been set up with the Reader IAM role for the correct subscription (as suggested by the documentation). The error I'm seeing in Splunk is as follows:

File "/opt/splunk/etc/apps/TA-MS-AAD/lib/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01

Any help or advice would be appreciated.

I have a data field with the value "FW: [ DOC 45 ] DTP: DEMO XXX CCC | 20147". From this I need to extract "DEMO XXX CCC" into an output subject field, i.e. subject="DEMO XXX CCC".
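
A minimal rex sketch, assuming the text to keep always sits between "DTP:" and the " | " separator, and that the source field is called message (a hypothetical name; swap in the real one):

| rex field=message "DTP:\s*(?<subject>.+?)\s*\|"

The lazy capture takes everything after "DTP:" up to (but not including) the pipe, trimming the surrounding spaces.
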
How do you filter out IPv6 and link-local 169.254.0.0/16 addresses from a multi-value field?

Data example:

HOST     IP LIST
hostA    10.0.0.3, 10.3.4.6, 169.254.1.5, fe80::2000:aff:fea7:f7c
hostB    10.0.0.2, 192.168.3.12, 169.254.8.9, fe80::2000:aff:fea7:d3c

I have attempted a number of combinations of mvfilter, match and cidrmatch, and I can't get it to work:

| eval ip_list_filter_IPv6 = mvfilter(match(ip_list_orig, "/\b(?:(?:2(?:[0-4][0-9]|5[0-5])|[0-1]?[0-9]?[0-9])\.){3}(?:(?:2([0-4][0-9]|5[0-5])|[0-1]?[0-9]?[0-9]))\b")
| eval ip_list_filter_169 = mvfilter(match(ip_list_filter_IPv6, NOT cidrmatch(169.254.0.0/16, ip_list_filter_IPv6))

I thought cidrmatch might do it all, but I believe it is not a validation function but one that checks whether an IP is in a given range. Thanks for your help.
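
Both tests can live inside a single mvfilter call, since mvfilter evaluates its expression once per value of the one field it references. A minimal sketch, assuming the multivalue field is ip_list_orig:

| eval ip_list_clean=mvfilter(match(ip_list_orig, "^(\d{1,3}\.){3}\d{1,3}$") AND NOT cidrmatch("169.254.0.0/16", ip_list_orig))

The match() keeps only IPv4-shaped values (dropping the fe80:: entries), and the cidrmatch(), whose subnet argument must be a quoted string, then discards the 169.254.0.0/16 addresses.
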
Hi all, I'm struggling to make my chart look how I want it. What I currently have is a graph of log counts received from certain services over the past 3 months.

- I don't understand why my months are ordered like this: 2022 December, 2023 February, 2023 January, where January should be in the middle.
- Aside from this, my main struggle is to filter out the top services with the highest log counts. These are a lot higher than the other ones, so I'll have to make a second graph with the smaller ones. How can I filter the top (say 4) out? (AND srv!=*** is not the proper way to do it in this case; see the sketch below.)

| dbxquery query="select to_char(received_ts,'YYYY Month') as Month, srv, sum(log_Count) as Total_Log_Count from gmssp.esm.esm_audit_day where client_id = **** AND received_ts >= DATE_TRUNC('month', current_date) - '3 month'::interval AND received_ts < DATE_TRUNC('month', current_date) AND SRV != 'ignor' AND SRV != 'UNK' group by srv, month" connection="******"
| chart max(total_log_count) by srv month

Thanks a lot for your help!
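
On the ordering: chart sorts the month labels as strings, so an alphabetic format like 'YYYY Month' will never sort chronologically; emitting a sortable key such as to_char(received_ts,'YYYY-MM') in the SQL would fix it. For isolating the top services, one hedged sketch (with "..." standing for the same SQL query as above) ranks services by their total and keeps the four biggest:

| dbxquery query="..." connection="******"
| eventstats sum(total_log_count) as srv_total by srv
| sort 0 - srv_total
| streamstats dc(srv) as srv_rank
| where srv_rank<=4
| chart max(total_log_count) by srv month

Flipping the filter to srv_rank>4 would give the second graph with the smaller services.
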
Hi all, how do I apply a range to first and last so that I only keep results where the date falls between 3 weeks ago and today, matching against either first or last, in the Splunk query below?

| eval first = strptime(first_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z"), last = strptime(last_detected, "%Y-%m-%dT%H:%M:%S.%3N%Z")

Thanks.
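
Since first and last are epoch values after strptime, a numeric comparison against a relative_time() cutoff would do it. A sketch, keeping events where either timestamp falls within the last three weeks:

| eval cutoff=relative_time(now(), "-3w@d")
| where (first>=cutoff AND first<=now()) OR (last>=cutoff AND last<=now())
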
Hi, I am looking for a way to take a field in my Splunk query whose value matches a field in a lookup file and replace it with the associated value from the lookup file. The query has a field "index" whose values are the same as the lookup file field "CAPNSplunk". If the "index" field value matches the lookup file's "CAPNSplunk" value, then the "index" field value should be replaced with the associated "RANSplunk" field value from the lookup file.

Lookup file:

CAPNSplunk,RANSplunk
"Pricing","Pricing Outlier"
"Smart_Factory","Smart Factory BUCT"
"SMARTFACTORY_LOGISTICS","Smart Factory Logistics"
"SmartFactory_PM_Console","Smart Factory PM Console"
"GCW_Dashboard","Global Contingent Worker Dashboard"
"HRM_Spans_Layers","HRM - Spans & Layers"
"Unity_Portal-Part_Aggregation","Unity Portal"
"Blackbird_Dashboard","Blackbird"
"WWops","WWOps"
"AGS_metrology_AutoML","Metrology Auto ML Classification"
"Action_Plan_Tracker","IDCL"

index values:

Pricing
Smart_Factory
SMARTFACTORY_LOGISTICS
SmartFactory_PM_Console
GCW_Dashboard
HRM_Spans_Layers
Unity_Portal-Part_Aggregation
Blackbird_Dashboard
WWops
AGS_metrology_AutoML
Action_Plan_Tracker

For example: if the "index" field value is "Pricing", then it should be replaced with "Pricing Outlier" after looking into the lookup file.
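
A minimal sketch using the lookup command, assuming the file is uploaded as a lookup named capn_ran.csv (a hypothetical name; swap in the real one):

| lookup capn_ran.csv CAPNSplunk AS index OUTPUT RANSplunk
| eval index=coalesce(RANSplunk, index)
| fields - RANSplunk

The coalesce() keeps the original index value when there is no match in the lookup.
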
I want to extract the 5-digit number 54879 as a numeric field.
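
A minimal sketch, assuming the 5-digit number appears in _raw and is not embedded in a longer run of digits:

| rex "(?<number>\b\d{5}\b)"
| eval number=tonumber(number)

The word boundaries (\b) keep it from matching the first five digits of a longer number, and tonumber() makes the extracted field numeric.
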
I have some Check Point (firewall) logs that are generating an alert ("Data hygiene - events in the future"). I would like to confirm that the logs are arriving with a time in the future, because they come in with the time generated on the Check Point firewall. I tried using some SPL but I don't know if it's right. Examples:

SPL:

| rest /services/data/indexes
| search title=checkpoint
| search totalEventCount > 0
| eval now=strftime(now(), "%Y-%m-%d")
| stats first(minTime) as "Earliest Event Time" first(maxTime) as "Latest Event Time" first(now) as "Current Date" first(currentDBSizeMB) as currentDBSizeMB by title
| rename title as "Index"
| sort - currentDBSizeMB
| eval "Index Size in GB" = round(currentDBSizeMB/1000,2)
| table Index "Earliest Event Time" "Latest Event Time" "Current Date" "Index Size in GB" updated

Or this SPL:

index=idx_checkpoint earliest=+5m latest=+10y
| eval criationtimelog=strftime(creation_time,"%Y-%m-%d %H:%M:%S")
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| table host _time indextime criationtimelog
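
A direct way to confirm future-stamped events is to compare each event's parsed time (_time) with the time it was indexed (_indextime); a large positive gap means the event claimed a time ahead of its arrival. A minimal sketch, assuming more than 5 minutes of skew counts as "in the future":

index=idx_checkpoint
| eval skew=_time-_indextime
| where skew>300
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S"), index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table host event_time index_time skew
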