Please help me understand what dependencies the Splunk Security Essentials (SSE) app has on the Enterprise Security (ES) and ES Content Update apps. I have posted this before, but it is still not clear to me. I appreciate your feedback. Thank you, and Happy Thanksgiving to you and yours.
I know there is an "advanced search" option, but I can't find a setting there to exclude the links.
I am using Splunk Universal Forwarder 8.1.1 on a Linux server configured as a log aggregator. I have 7 well-defined sourcetypes in inputs.conf, based on log files in the following directories: /var/log/remote/LINUX, /var/log/remote/NETWORK, /var/log/remote/VMWARE.

inputs.conf for the LINUX directory:

[monitor:///var/log/remote/LINUX/*.log]
host_regex = LINUX\/(.+)_.+\.log
index = linux-log
sourcetype = linux-messages
disabled = 0

When I search, I see sourcetypes such as cron, cron-4, syslog, and cisco-4 in addition to the ones defined in inputs.conf. I traced these back to learned sourcetypes. The cisco-4 sourcetype is reading a file directly in /var/log/remote. Given the sourcetypes I have defined, I would not expect any visibility into that directory. Is there a way to disable the learned sourcetypes, or to whitelist only the ones I want?
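Not an authoritative answer, but a sketch of the usual two-step approach: learned sourcetypes are only generated for monitored files that have no explicit sourcetype, so something other than the seven stanzas is probably picking up /var/log/remote directly. btool on the forwarder shows every effective input stanza and which file defines it:

$SPLUNK_HOME/bin/splunk btool inputs list --debug

If a broad stanza turns up, either narrow it or give it an explicit sourcetype. Alternatively, props.conf can switch sourcetype learning off for that tree; the [source::...] pattern below is an assumption to adjust to whatever btool reveals, and it belongs on the parsing tier (the indexers, since a UF does not parse):

# props.conf on the parsing tier
[source::/var/log/remote/...]
LEARN_SOURCETYPE = false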
Hi - I have some data that looks like this, which ingests into Splunk with no issues at all:

11/24/2021 08:47:21.321,"category":"transaction","tc"="93","amount_approved":"9.99","amount_requested":"493.95" etc etc etc
11/24/2021 08:45:14.121,"category":"transaction","tc"="93","amount_approved":"5.99","amount_requested":"5.99" etc etc etc
11/24/2021 08:45:14.121,"category":"transaction","tc"="01","amount_approved":"6.99","amount_requested":"6.99" etc etc etc

I want to do a search that filters the transactions down to only those where the amounts differ:

index=ABC sourcetype=XZX category=transaction tc=93 amount_approved!=amount_requested

That simple search doesn't work: Splunk is not filtering on the amount_approved!=amount_requested comparison. With the sample data above, I get both "tc=93" transactions back instead of just the first one. If I remove amount_approved!=amount_requested from the search line and move it into a where clause like this:

index=ABC sourcetype=XZX category=transaction tc=93 | where amount_approved!=amount_requested

it works fine, as I only get 1 event back. What is wrong with my initial search line? I would like to avoid reading in all of the transactions before filtering, hence the wish to put the comparison on the search line.
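For what it's worth, this is documented behavior rather than a bug: terms on the search line are matched as field/literal pairs, so amount_approved!=amount_requested compares the field against the literal string "amount_requested", never against the other field's value. Field-to-field comparison needs an eval-style command such as where, and the indexed terms already on the search line (index, sourcetype, tc) do the heavy filtering before where runs:

index=ABC sourcetype=XZX category=transaction tc=93
| where amount_approved != amount_requested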
Hello everyone, I am integrating logs from Trend Micro Portable Security via HEC. Per the Trend Micro user guide, it needs a HEC token with access to 5 indexes named sacnnedlog, detectedlog, applicationinfo, updateinfo, and assetinfo; the names must not be changed or it will not be able to send logs. So I created a HEC token with sourcetype=trendmicro and gave it access to all 5 indexes, created on the HF. The catch is that in our Splunk environment we cannot keep 5 indexes for one source, so we created the 5 indexes on the HF (same names as above) and are trying to route all logs for sourcetype trendmicro to an index named app_trendmicro (created on the cluster master). I have used the following props and transforms.

In props.conf:

[trendmicro]
TRANSFORMS-routing = trendmicro_routing

In transforms.conf:

[trendmicro_routing]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = app_trendmicro

However, we are not receiving the logs, and the internal index shows the error "Received event for unconfigured/disabled/deleted index".
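A hedged reading of that error: "Received event for unconfigured/disabled/deleted index" is raised by the indexers when an event arrives carrying an index name that is not defined there, so creating the five indexes on the HF alone does not help. The rewrite has to fire on the HF before forwarding, and it only fires if events really reach the parsing pipeline with sourcetype trendmicro (worth double-checking, since HEC events sent to the /services/collector/event endpoint can bypass parts of the parsing pipeline). As a stop-gap while debugging, the five names from the guide can be defined on the cluster peers so nothing is dropped; a sketch of one stanza, repeated for the other four names:

# indexes.conf pushed from the cluster master to the peers - a hypothetical
# fallback so events are retained even while the rewrite is not firing
[sacnnedlog]
homePath   = $SPLUNK_DB/sacnnedlog/db
coldPath   = $SPLUNK_DB/sacnnedlog/colddb
thawedPath = $SPLUNK_DB/sacnnedlog/thaweddb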
Hello, I have been trying to get a Splunk config to work for a while, and have come here for help - I'm out of ideas.

I have network syslog from many different sources, all being sent to a heavy forwarder. My goal is to match the syslog against two different regexes and have the matched data sent to two different destinations. My configs:

props.conf:

[host::*]
TRANSFORMS-SYSLOG = send_to_serverA, send_to_serverB

transforms.conf:

[send_to_serverA]
regex = "regex goes here"
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverA

[send_to_serverB]
regex = "regex goes here"
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverB

outputs.conf:

[syslog:serverA_group]
server = x.x.x.1:514,x.x.x.2:514

[syslog:serverB_group]
server = x.x.1.1:514,x.x.1.2:514

This is currently not working, and it seems to have something to do with DEST_KEY = _SYSLOG_ROUTING; I get some very strange results. Can anyone point out where I have gone wrong, or whether this can be done at all?

Regards, Ryan
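Two things stand out against the documented syntax, so here is a hedged corrected sketch: transforms.conf keys are case-sensitive (REGEX, not regex), and with DEST_KEY = _SYSLOG_ROUTING the FORMAT value must name the syslog output group itself, i.e. the part after "syslog:" in outputs.conf:

# transforms.conf
[send_to_serverA]
REGEX = <regex goes here>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverA_group

[send_to_serverB]
REGEX = <regex goes here>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverB_group

# outputs.conf (unchanged)
[syslog:serverA_group]
server = x.x.x.1:514,x.x.x.2:514

[syslog:serverB_group]
server = x.x.1.1:514,x.x.1.2:514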
How do I extract all values from a JSON field containing a list with multiple strings using rex? The content of the field contains a list and a variable in string form. The number of items in the list can vary, and so can the length of the items. The field looks like this:

"{\"variable2\":[\"AB1234\",\"BA1234\",\"DCBA\",\"ABCD\"],\"name\":\"namegiven\"}

So far, I was able to extract the name field with the following query:

| rex field=field.subfield.body max_match=0 "\"name\"\:\"(?<name>[a-zA-Z]+)\""

variable2 is a list with multiple strings, and this is what leaves me puzzled. It's not the expression to recognize the strings in the list that is the problem; I'm looking for a way to look inside the list, match two different patterns, and find all items in it. Can someone help out?
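One possible two-step approach, assuming the items always sit between the square brackets as in the sample: first capture the raw bracketed list into a scratch field, then run a second rex with max_match=0 over just that capture.

| rex field=field.subfield.body "\"variable2\":\[(?<v2_raw>[^\]]+)\]"
| rex field=v2_raw max_match=0 "\"(?<variable2>[^\"]+)\""

The second pattern simply takes everything between each pair of quotes, so it tolerates items of any shape; if only two specific patterns should count, the second rex can instead use an alternation such as "(?<variable2>[A-Z]+\d+|[A-Z]+)".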
Hello, thank you for taking the time to consider my question.

I currently have a working SPL search that retrieves IPv4 addresses from a CSV using inputlookup, which runs tremendously fast on its own. However, when I plug that inputlookup into a larger outer search that correlates those values with destination IPv4s seen and reported by our firewall provider, it takes much, much longer for the results to appear (usually >2 minutes total runtime, and that's only using the suspicious IPs in the CSV from the day before). Ideally this search would take less than a minute to complete, comparing around 25,000-30,000 IPv4s from the CSV with the several hundred reported by the firewall every 10 minutes or so. The syntax for the search is below:

index=firewall earliest=-10m@m latest=now vsys_name=Browser [| inputlookup phishCatch.csv | rename "IPv4" as dest_ip | table dest_ip]
| eval totalMBin=round(bytes_in/1024,2)
| rename generated_time as "Time Received" user as "Username" client_ip as "Source IPv4 Address" action as "Action Taken" totalMBin as "Total MB In" dest_ip as "Suspicious IPv4"
| table "Time Received", "Username","Source IPv4 Address","Suspicious IPv4","Total MB In","Action Taken"

I'm guessing that I will have to use some sort of acceleration to improve the speed, but I'm very much a Splunk novice and don't really understand data models or how Splunk acceleration actually works. Any advice on how best to proceed and improve the efficiency and speed of this search would be greatly appreciated! Thanks in advance.
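Before reaching for acceleration, one standard fix: a subsearch must finish before the outer search starts and is silently capped (10,000 results by default), which 25,000-30,000 IPs will exceed, so some suspicious IPs are never even searched for. The lookup command instead streams each firewall event against the CSV; a hedged rewrite, assuming phishCatch.csv is configured as a lookup in Splunk:

index=firewall earliest=-10m@m latest=now vsys_name=Browser
| lookup phishCatch.csv IPv4 AS dest_ip OUTPUT IPv4 AS phish_match
| where isnotnull(phish_match)
| eval totalMBin=round(bytes_in/1024,2)
| rename generated_time as "Time Received" user as "Username" client_ip as "Source IPv4 Address" action as "Action Taken" totalMBin as "Total MB In" dest_ip as "Suspicious IPv4"
| table "Time Received","Username","Source IPv4 Address","Suspicious IPv4","Total MB In","Action Taken"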
Hi, I need to use a 24-hour clock instead of AM/PM in my Dashboard Studio tables and charts, but they automatically use AM/PM. I have tried changing my locale in the browser, but that only works in search, not in Dashboard Studio. Any solutions appreciated! Thanks!
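One workaround that sidesteps locale handling entirely is to format the timestamp in the search itself, so the table receives a plain string rather than a time value to render; a sketch, assuming the column comes from _time (the table fields are illustrative):

| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table Time host sourcetype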
Hello, I have a setup similar to the example shown on this page, and we noticed that the firewalls show systematic TCP session breakdown/rebuild. So it looks like the default setting of autoLBFrequency=30 is in use. Furthermore, it looks like the newer Splunk UF versions we are on have deprecated the ability to disable load balancing. Can I set this setting to 86400 or so, so that it doesn't break and recreate connections all the time? Are there any pitfalls with this approach? Are there any other hacks that would let me disable LB, which makes no sense when a group has just 1 indexer in it?

https://docs.splunk.com/Documentation/Forwarder/8.2.3/Forwarder/Configureforwardingwithoutputs.conf

[tcpout]
defaultGroup=indexer1,indexer2

[tcpout:indexer1]
server=10.1.1.197:9997

[tcpout:indexer2]
server=10.1.1.200:9997
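For reference, a sketch of what raising the interval would look like; autoLBFrequency is an ordinary outputs.conf setting that can sit globally under [tcpout] or per group, so nothing exotic is needed (86400 is the value floated above, not a recommendation):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = indexer1,indexer2
autoLBFrequency = 86400

The trade-off to keep in mind is that with an interval this long, where data lands is effectively decided by restarts and connection failures rather than by the load balancer; that is harmless with one indexer per group, but worth remembering if the groups ever grow.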
After upgrading our environment from 8.1.3 to 8.2.3, some searches return "StatsFileWriterLz4 file open failed". Our environment is one search head, one indexer, and one cluster master connected in an indexer cluster, with one heavy forwarder and a few universal forwarders. All servers run on Windows 2019. A specific error message looks like this:

"StatsFileWriterLz4 file open failed file=E:\Splunk\var\run\splunk\srtemp\374915603_1292_at_1637749050.1\statstmp_merged_5.sb.lz4"

after running this tstats search:

| tstats max(_time) as time max(_indextime) as indextime where index IN ("operations_script_log","telemetry*") sourcetype=* earliest=-30d@d latest=@d by sourcetype host source date_year date_month date_mday date_hour date_minute

This search completed and returned the desired results prior to the upgrade, but won't complete after it. It runs for about nine seconds before the "StatsFileWriterLz4 file open failed" error appears and stops the search. Has anybody encountered this problem before and found a solution?
Hello,

Can you tell me please why the below does not work?

| rest splunk_server=local servicesNS/-/-/data/ui/views/
| where update > relative_time(now(),"-10d@d")

I want to find the dashboards that were updated in the last 10 days, but it does not seem to return anything. Is it because I need to fix the timestamp format?

Thanks!
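A hedged guess at two likely issues: the REST field is named updated (not update), and it comes back as a string such as 2021-11-24T09:15:27+01:00, so comparing it against epoch time never matches. Converting with strptime first should work:

| rest splunk_server=local servicesNS/-/-/data/ui/views/
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch > relative_time(now(), "-10d@d")
| table title eai:acl.app updated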
Hello guys, I'm new to Splunk and I would like to know whether it is possible to view the logs for one date on each page. Would it be possible to have 4 pages? Note that I can't simply choose the number of logs to show per page, because I don't know how many logs each date will have; it is a coincidence that there are 2 per date here. Thanks to all.
Hello, I'm having some trouble finding a proper solution for this problem. I created a custom alert action for Splunk v6.6.3 (I know, it is very old) that executes a Python script, which runs an SPL query through the Splunk REST API. At the moment the credentials for the Splunk REST API are written in cleartext in the script; is there a way to encrypt them so they are not visible to other users able to read the script code? Token authentication would have been the proper solution, but it's not available in this old Splunk version. Do you know of any other solutions for this issue?

Thank you in advance, have a good day!
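One approach that already exists on 6.6: store the credential in Splunk's encrypted credential store (storage/passwords, populated once via REST or a setup page) and have the alert script read it back at run time. Modular alert actions receive a JSON payload on stdin that includes a short-lived session_key, so no password has to live in the script at all. A rough Python 2 sketch (6.6 ships Python 2.7); "my_app" and "my_realm" are hypothetical placeholders:

import json
import ssl
import sys
import urllib2

# The alert action payload arrives on stdin and carries a session key.
payload = json.loads(sys.stdin.read())
session_key = payload["session_key"]

# Splunk's default management certificate is self-signed.
ctx = ssl._create_unverified_context()

# Fetch the stored credential back from the encrypted store.
url = ("https://localhost:8089/servicesNS/nobody/my_app/"
       "storage/passwords?output_mode=json")
req = urllib2.Request(url, headers={"Authorization": "Splunk " + session_key})
entries = json.loads(urllib2.urlopen(req, context=ctx).read())["entry"]

password = None
for entry in entries:
    if entry["content"].get("realm") == "my_realm":
        password = entry["content"]["clear_password"]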
Hello! I love the Lookup File Editor app (https://splunkbase.splunk.com/app/1724/), but for some time it has behaved strangely for me on our Splunk Cloud instance. Especially annoying is that it doesn't list any of the lookups/KV stores, i.e. /en-GB/app/lookup_editor/lookup_list is completely empty. I can't tell you exactly when it stopped working, but likely 4-5 months ago; I just haven't had time to look into it.

There is an information icon in the top right corner that, upon clicking, reads: "Dashboard Updated. This dashboard has been updated. If you have any issues with this dashboard, contact the dashboard's owner. You can also temporarily open a previous view of this dashboard." That previous view used to open up a list of the lookups and KV stores, but after updating to the latest version 3.5 today (and subsequently uninstalling and reinstalling the app while troubleshooting), even the temporary link now just gives me another blank list, so that workaround no longer helps either.

Sharing is "Global", permissions are set so "Everyone" has "Read" and "sc_admin" has "Write", and the app is enabled. The health dashboards for "Status" and "Logs" don't show anything. I tested creating lookups and KV stores in multiple apps both before and after; nothing shows in the list.

Any ideas on what is wrong? Any tips on what to look for while troubleshooting would be appreciated!

Best regards, Victor
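One way to narrow this down, since the app's own health dashboards are silent: the lookup files themselves are exposed over REST, so if the query below returns rows while the app's lookup_list stays empty, the data layer is fine and the fault is in the app's UI layer (a browser console full of JavaScript errors, or blocked static assets, would point the same way):

| rest /services/data/lookup-table-files splunk_server=local
| table title eai:acl.app eai:acl.sharing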
Hi, I can't understand why this query works in search but not when I insert it in a dashboard. I assign the option chosen in the filter to the mese variable, then I check the choice and, based on what was chosen, return either the sum of the X_MESE_PRECEDENTE columns or the chosen month. The options are "Anno Solare" (solar year), "Anno Fiscale" (fiscal year), and the months. Where am I wrong? Why does it not work? Thanks, Antonio

The search version (with the filter choice set manually):

| loadjob savedsearch="antonio:enterprise:20211025_PASSAGGIO_AGGREGATO_DATE"
| where (sourcetype="fs_ampliamenti_ip" AND OFFERTA="DIRETTA" AND STATO="OK") OR (sourcetype="fs_diretta" AND TIPOLOGIA="SUBNETIP" AND OFFERTA="DIRETTA" AND STATO="OK")
| eval MESEATTUALE=strftime(relative_time(now(), "-0d@d"), "%m")
| eval MESEATTUALE=11
| eval mese="Anno Fiscale" (manual setting of the chosen filter)
| eval ANNOFISCALE=if(MESEATTUALE-3 <= 0, MESEATTUALE-3+12, MESEATTUALE-3)
| rename PROGRESSIVO_MESE as "0_MESE_PRECEDENTE"
| eval SOLARE = mvappend($0_MESE_PRECEDENTE$,$1_MESE_PRECEDENTE$,$2_MESE_PRECEDENTE$,$3_MESE_PRECEDENTE$,$4_MESE_PRECEDENTE$,$5_MESE_PRECEDENTE$,$6_MESE_PRECEDENTE$,$7_MESE_PRECEDENTE$,$8_MESE_PRECEDENTE$,$9_MESE_PRECEDENTE$,$10_MESE_PRECEDENTE$,$11_MESE_PRECEDENTE$,$12_MESE_PRECEDENTE$)
| eval FISCALE=0
| foreach *_MESE_PRECEDENTE [| eval FISCALE = if (<<MATCHSTR>> < ANNOFISCALE, FISCALE + '<<FIELD>>', FISCALE)]
| eval CHI=case( mese="0_MESE_PRECEDENTE", $0_MESE_PRECEDENTE$ , mese="1_MESE_PRECEDENTE", $1_MESE_PRECEDENTE$ , mese="2_MESE_PRECEDENTE", $2_MESE_PRECEDENTE$ , mese="3_MESE_PRECEDENTE", $3_MESE_PRECEDENTE$ , mese="4_MESE_PRECEDENTE", $4_MESE_PRECEDENTE$ , mese="5_MESE_PRECEDENTE", $5_MESE_PRECEDENTE$ , mese="6_MESE_PRECEDENTE", $6_MESE_PRECEDENTE$ , mese="7_MESE_PRECEDENTE", $7_MESE_PRECEDENTE$ , mese="8_MESE_PRECEDENTE", $8_MESE_PRECEDENTE$ , mese="9_MESE_PRECEDENTE", $9_MESE_PRECEDENTE$ , mese="10_MESE_PRECEDENTE", $10_MESE_PRECEDENTE$ , mese="11_MESE_PRECEDENTE", $11_MESE_PRECEDENTE$ , mese="12_MESE_PRECEDENTE", $12_MESE_PRECEDENTE$ , mese="Anno Solare", $SOLARE$ , mese="Anno Fiscale", $FISCALE$ , 1=1, "INV")
| eval RIS = case( mese = "Anno Fiscale", FISCALE, mese = "Anno Solare", SOLARE, 1=1, CHI)
| stats sum(RIS) as RISULTATO
| table RISULTATO

The dashboard version (here $previousmonth$ is the token from the chosen filter):

<query>| loadjob savedsearch="antonio:enterprise:20211025_PASSAGGIO_AGGREGATO_DATE"
| where (sourcetype="fs_ampliamenti_ip" AND OFFERTA="DIRETTA" AND STATO="OK") OR (sourcetype="fs_diretta" AND TIPOLOGIA="SUBNETIP" AND OFFERTA="DIRETTA" AND STATO="OK")
| eval MESEATTUALE=strftime(relative_time(now(), "-0d@d"), "%m")
| eval mese="$previousmonth$" (this is the token, chosen filter)
| eval ANNOFISCALE=if(MESEATTUALE-3 &lt;= 0, MESEATTUALE-3+12, MESEATTUALE-3)
| rename PROGRESSIVO_MESE as "0_MESE_PRECEDENTE"
| eval SOLARE = mvappend($$0_MESE_PRECEDENTE$$,$$1_MESE_PRECEDENTE$$,$$2_MESE_PRECEDENTE$$,$$3_MESE_PRECEDENTE$$,$$4_MESE_PRECEDENTE$$,$$5_MESE_PRECEDENTE$$,$$6_MESE_PRECEDENTE$$,$$7_MESE_PRECEDENTE$$,$$8_MESE_PRECEDENTE$$,$$9_MESE_PRECEDENTE$$,$$10_MESE_PRECEDENTE$$,$$11_MESE_PRECEDENTE$$,$$12_MESE_PRECEDENTE$$)
| eval FISCALE=0
| foreach *_MESE_PRECEDENTE [| eval FISCALE = if (&lt;&lt;MATCHSTR&gt;&gt; &lt; ANNOFISCALE, FISCALE + '&lt;&lt;FIELD&gt;&gt;', FISCALE)]
| eval CHI=case( mese="0_MESE_PRECEDENTE", $$0_MESE_PRECEDENTE$$ , mese="1_MESE_PRECEDENTE", $$1_MESE_PRECEDENTE$$ , mese="2_MESE_PRECEDENTE", $$2_MESE_PRECEDENTE$$ , mese="3_MESE_PRECEDENTE", $$3_MESE_PRECEDENTE$$ , mese="4_MESE_PRECEDENTE", $$4_MESE_PRECEDENTE$$ , mese="5_MESE_PRECEDENTE", $$5_MESE_PRECEDENTE$$ , mese="6_MESE_PRECEDENTE", $$6_MESE_PRECEDENTE$$ , mese="7_MESE_PRECEDENTE", $$7_MESE_PRECEDENTE$$ , mese="8_MESE_PRECEDENTE", $$8_MESE_PRECEDENTE$$ , mese="9_MESE_PRECEDENTE", $$9_MESE_PRECEDENTE$$ , mese="10_MESE_PRECEDENTE", $$10_MESE_PRECEDENTE$$ , mese="11_MESE_PRECEDENTE", $$11_MESE_PRECEDENTE$$ , mese="12_MESE_PRECEDENTE", $$12_MESE_PRECEDENTE$$ , mese="Anno Solare", $$SOLARE$$ , mese="Anno Fiscale", $$FISCALE$$ , 1=1, "INV")
| eval RIS = case( mese = "Anno Fiscale", FISCALE, mese = "Anno Solare", SOLARE, 1=1, CHI)
| stats sum(RIS) as RISULTATO
| table RISULTATO</query>
How to sum the values of a multivalue token in Simple XML? Let's say you have an mv token named test1 with values of: 1,2,3

How to achieve something like:

<eval token="test2">sum($test1$)</eval>

Thanks!
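<eval> token expressions support eval functions but offer no aggregation over a comma-separated string, so a common workaround (a sketch, assuming the token arrives as "1,2,3") is a small hidden search that does the summing and sets the result token when it finishes:

<search>
  <query>| makeresults
| eval vals=split("$test1$", ",")
| mvexpand vals
| stats sum(vals) as total</query>
  <done>
    <set token="test2">$result.total$</set>
  </done>
</search>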
Hi all, I have a log with 3 events inside it (you can see it in the screenshot; I pasted the sample logs here: https://regex101.com/r/EvmMeR/1):

1st event - short event
2nd event - short event, multi-line
3rd event - VERY LONG and multi-line; needs to be dropped per the client.

I managed to DROP the 3rd event by matching logs greater than 2000 characters. The problem is that, although I dropped the event, Splunk still raises this warning:

11-24-2021 08:02:57.049 +0000 WARN LineBreakingProcessor [6453 parsing] - Truncating line because limit of 2000 bytes has been exceeded with a line length >= 55179 - data_source="SAMPLETOSHARE.txt", data_host="5bfd55dbdcdd", data_sourcetype="sample"

Is there a way to stop Splunk from flagging an issue for logs that were dropped?
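The warning and the drop happen in different phases, which is why both occur: TRUNCATE is applied during line breaking, before the transforms that send events to the nullQueue run, so the WARN fires even though the event is never indexed. The usual way to silence it is to raise TRUNCATE above the longest expected line for this sourcetype; a sketch using the sourcetype from the message (0 disables truncation entirely):

# props.conf on the parsing tier
[sample]
TRUNCATE = 60000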
Hi, a user is reporting the following: from hostname1, we are pushing syslog to the Splunk indexer server (IP 10.20.30.40) via port 55XY; can you please check whether anything needs to be done on the Splunk end to see the data in Splunk?

Can anyone please help me with this?

Regards, Rahul
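For syslog pushed straight at an indexer, the usual Splunk-side requirement is just a network input listening on that port, plus a host firewall that allows it. A sketch of the inputs.conf stanza on 10.20.30.40 (55XY is kept as the redacted port from the post; whether the sender uses UDP or TCP, and the index/sourcetype values, are assumptions to confirm with the sender):

[udp://55XY]
sourcetype = syslog
connection_host = ip
index = main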
Dear Professor, I have two alert searches like this.

1. Search 1:

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
| timechart span=2m count as applicaton_today
| eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
| eval yesterday_time=strftime(_time,"%H:%M")
| fields _time,yesterday_time,applicaton_today

And here is the output (screenshot).

2. Search 2:

index="xyz" sourcetype="xyz" "Application * sent to xyz success"
| timechart span=2m count as omni_today
| eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
| eval yesterday_time=strftime(_time,"%H:%M")
| fields _time,yesterday_time,omni_today

And here is the output (screenshot).

3. I tried to combine the two searches like this, then calculate the spike:

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
| timechart span=2m count as app_today
| eval mytime=strftime(_time,"%Y-%m-%dT%H:%M")
| eval yesterday_time=strftime(_time,"%H:%M")
| append [search index="xyz" sourcetype="xyz" "Application * sent to xyz" | timechart span=2m count as omni_today]
| fields _time,yesterday_time,app_today,omni_today
| eval spike=if(omni_today < app_today AND _time <= now() - 3*60 AND _time >= relative_time(now(),"@d") + 7.5*3600, 1, 0)

Here is the output (as in the image): each time span appears twice. How can I combine the two searches onto a single time span like this?

Thank you for your help.
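Each timechart in the append produces its own rows, so every 2-minute bucket shows up twice, once per series. Re-aggregating by _time after the append merges the two series onto one axis; a hedged sketch (sum() ignores the null cells each series carries for the other):

index="abc" sourcetype="abc" service.name=financing request.method="POST" request.uri="*/applications" response.status="200"
| timechart span=2m count as app_today
| append [search index="xyz" sourcetype="xyz" "Application * sent to xyz" | timechart span=2m count as omni_today]
| stats sum(app_today) as app_today sum(omni_today) as omni_today by _time
| eval yesterday_time=strftime(_time,"%H:%M")
| eval spike=if(omni_today < app_today AND _time <= now() - 3*60 AND _time >= relative_time(now(),"@d") + 7.5*3600, 1, 0)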