All Topics


Hello Splunk Community,

I have a merged event which shows whether a service is running or down. Here is an example of the event in Splunk:

*******************************************************************************
All services are running
1092827|default|service1 is running
37238191|default|service2 is running
16272373|default|service3 is running
*******************************************************************************

How can I split the merged event so I can extract the service name, status (running/down), and host? For example:

16272373|default|service3 is running
Host    |       | ServiceName is Status
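One possible approach (a minimal sketch, assuming the pipe-delimited layout shown above and treating the first column as the host): pull each per-service line out of the raw event into a multivalue field, expand it, and then extract the three fields. The field names Host, ServiceName, and Status follow the mapping sketched in the question.

    | rex max_match=0 "(?<svc_line>\d+\|[^|]+\|\S+ is (?:running|down))"
    | mvexpand svc_line
    | rex field=svc_line "^(?<Host>\d+)\|[^|]+\|(?<ServiceName>\S+) is (?<Status>running|down)$"
    | table Host ServiceName Status

Adjust the patterns if the real host values are not purely numeric or if service names can contain spaces.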
Greetings,

I was told by my instructor to use your product for an assignment; however, I am not getting the results that are shown. It seems as if Splunk is not reading the data from my files. I was able to add the data, but when I perform the search, it returns zero results. Attached are a screenshot of what it should look like and a screenshot of what my results are showing. How do I accurately import my files? Please help.

Thanks,
Melissa
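A common first diagnostic in cases like this (a hedged sketch, not a guaranteed fix) is to confirm the uploaded data actually landed in an index and that the search time range covers the events' timestamps:

    index=* earliest=0
    | stats count by index, sourcetype

If the uploaded data shows up here but not in the assignment search, the usual culprits are a mismatched index or sourcetype in the search, or a time-range picker that excludes the events' timestamps.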
I have two searches: one to train an ML model and a second to apply the model. I would like to run them in sequence for each day: the train-ML search should run first, and the apply-ML search should run after it. In the current Splunk backfill script, the train-ML search is backfilled for the whole time period selected, and only then does the backfill of the apply-ML search start. Is there any way to solve this?
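One workaround, sketched only under stated assumptions: drive the per-day ordering yourself from a shell loop instead of the bundled backfill script. The saved-search names train_model and apply_model, the 30-day window, and the credentials are all hypothetical placeholders, and the CLI time-range flags should be verified against the CLI search help on your version:

    #!/bin/sh
    # Hypothetical sketch: backfill day by day, training before applying.
    for d in $(seq 30 -1 1); do
        e=$((d - 1))
        $SPLUNK_HOME/bin/splunk search "| savedsearch train_model" \
            -earliest_time "-${d}d@d" -latest_time "-${e}d@d" -auth admin:changeme
        $SPLUNK_HOME/bin/splunk search "| savedsearch apply_model" \
            -earliest_time "-${d}d@d" -latest_time "-${e}d@d" -auth admin:changeme
    done

Because each CLI search blocks until it finishes, the apply step for a given day cannot start before that day's training completes.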
I have this Splunk search:

index=cloud EventName: "Error Occurred" XChangeToSalesForce
| rename message as "Message" _time as Time
| table Time, Message

When I run it in Splunk search, I get the response below:

1637759064  Multiple Terms found for the same agency. Agency code:

But when the email is sent, the Message field is empty:

Time        Message
1637759064
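A hedged sketch of one variant worth trying (the field handling here is an assumption, not a confirmed fix): keep the renamed field unquoted and format the epoch time before tabling, so the emailed table carries plain field names that match the table clause exactly:

    index=cloud EventName="Error Occurred" XChangeToSalesForce
    | eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | rename message as Message
    | table Time, Message

Note also that EventName: "Error Occurred" uses a colon; field filters in SPL use =, so EventName="Error Occurred" is the usual form.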
Hello, I am trying to execute the following query but keep getting:

Error in 'eval' command: The expression is malformed. Expected AND.

. . .
| streamstats current=f last(_time) as last_time by host
| eval gap = last_time - _time
| where gap > 50
| convert ctime(last_time) as last_time
| eval refresh_seconds = (avg(last_time) / 1000) as refresh_minutes

What am I doing wrong?
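The last eval mixes stats-style syntax into an eval expression: avg() is a stats function, not an eval function, and "as" is not valid inside eval (the output field name goes on the left of the =). A minimal sketch of what may have been intended (converting the gap to minutes is an assumption about the goal):

    | streamstats current=f last(_time) as last_time by host
    | eval gap = last_time - _time
    | where gap > 50
    | eval refresh_minutes = round(gap / 60, 2)
    | convert ctime(last_time) as last_time

If an average across events is the goal, compute it with stats (e.g. | stats avg(gap) as avg_gap by host) rather than inside eval.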
I am using a chart command to get a list of IPs and servers with an error, and I am attempting to get only the top 10 results. For some reason, when I do the top for ip I get no results, but if I do it for server I get results.

index=foo result=error
| chart count by server, ip
| top limit=10 ip
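chart count by server, ip pivots the ip values into column names, so after that command there is no field literally named ip for top to consume; server remains as the row field, which is why top works for it. A hedged sketch that keeps ip as a real field (assuming the goal is the 10 busiest server/ip pairs):

    index=foo result=error
    | stats count by server, ip
    | sort 10 -count

Alternatively, top limit=10 ip by server run directly on the raw events gives the top IPs per server.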
First-time installer of Qualys-TA here. After completing all the setup in the UI, I ran the command below (as described on page 26 of the docs: https://www.qualys.com/docs/qualys-ta-for-splunk.pdf):

cd $SPLUNK_HOME/etc/apps/TA-QualysCloudPlatform
$SPLUNK_HOME/bin/splunk cmd python ./bin/run.py -k -s -u <qualys username> -p <qualys password>

This throws the following error in $SPLUNK_HOME/var/log/splunk/ta_QualysCloudPlatform.log:

qualysModule.splunkpopulator.basepopulator.BasePopulatorException: could not load API response. Reason: 'Event' object has no attribute 'write_event'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/qualys_log_populator.py", line 240, in _run
    qlogger.error(e.message)
AttributeError: 'BasePopulatorException' object has no attribute 'message'

When I added more debug output to the various Python scripts, I saw that the error pointed to "NoneType" for self.EVENT_WRITER. The log contained more info, as below:

TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: Python interpreter version = 3
TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: Qualys TA version=1.8.11
TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: Running for policy_posture_info. Host name to be used: $decideOnStartup. Index configured: main. Run duration: 9 * * * *. Default start date: 1999-01-01T00:00:00Z.
TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: TA-QualysCloudPlatform using username trann3ls73 and its associated password.
TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: API URL changed to https://qualysguard.qg3.apps.qualys.com for policy_posture_info data input
TA-QualysCloudPlatform: 2021-11-24 15:09:52 PID=564017 [MainThread] INFO: Another instance of policy_posture_info is already running with PID 508724. I am exiting.

On running ps ax | grep splunk, I could see many instances running:

root@splunktest:/opt/splunk/etc/apps/TA-QualysCloudPlatform/tmp# ps ax | grep splunk
12657 ?  Sl 15:28 splunkd -p 8090 start
12658 ?  Ss 0:00 [splunkd pid=12657] splunkd -p 8090 start [process-runner]
508681 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
508724 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
508734 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
508908 ? S  0:21 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
555183 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
555192 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
555219 ? S  0:00 /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualys.py
565505 ? Sl 0:15 splunkd -p 8089 restart
565506 ? Ss 0:00 [splunkd pid=565505] splunkd -p 8089 restart [process-runner]

Finally, after killing those PIDs, I got rid of the error. This really needs to be fixed, or proper troubleshooting steps must be documented; it caused me headaches for two whole days! Thanks!
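For anyone hitting the same "Another instance ... is already running" loop, a hedged sketch of the cleanup step described above (the grep pattern assumes the TA's data inputs run as qualys.py, as shown in the ps output):

    # List stale TA-QualysCloudPlatform input processes, then terminate them
    ps ax | grep '[q]ualys.py'
    ps ax | grep '[q]ualys.py' | awk '{print $1}' | xargs -r kill

The bracketed [q] keeps the grep from matching itself; verify the PIDs before killing anything on a production host.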
Hello, I just configured a new Custom Threat Intelligence feed in Splunk Enterprise Security and I'm getting a strange error in the audit view:

2021-11-24 10:31:04,387+0000 ERROR pid=78967 tid=MainThread file=base_modinput.py:execute:820 | Execution failed: 'ThreatlistModularInput' object has no attribute 'file_path'
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 811, in execute
    log_exception_and_continue=True
  File "/opt/splunk/etc/apps/SA-Utils/lib/SolnCommon/modinput/base_modinput.py", line 388, in do_run
    self.run(stanza)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/threatlist.py", line 679, in run
    self.execute_workloads(stanza, args, last_run)
  File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/threatlist.py", line 587, in execute_workloads
    'file_path': self.file_path,
AttributeError: 'ThreatlistModularInput' object has no attribute 'file_path'

The URL of the feed is: https://api.maltiverse.com/collection/uYxZknEB8jmkCY9eQoUJ/download?filetype=splunk-ipv4&token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjIyNjg0OTQ3NTEsImlhdCI6MTYzNzc3NDc1MSwic3ViIjo5MDMwfQ.mpM7tahLJEtUoM7fhwYzoHvQSOIuMTQVtCyGAEBDj3g

As you can see, it is a CSV where column 1 is the description and column 2 is the IP address, so I filled in the form in the Threat Intelligence module in Splunk ES as follows:

Field                          Value
File parser                    auto
Delimiting regular expression  ,
Extracting regular expression  (left blank)
Fields                         description:$1,ip:$2
Ignoring regular expression    (^#|^\s*$)
Skip header lines              1
Intelligence file encoding     UTF8
Sinkhole                       Yes

Can anybody help me out? Thanks in advance.
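For reference, the same settings expressed as a threatlist download stanza — a sketch only; the stanza name is hypothetical, the URL is abbreviated, and the attribute names are my recollection of the SA-ThreatIntelligence inputs.conf spec, so verify them against the app's README before relying on this:

    [threatlist://maltiverse_custom_ipv4]
    delim_regex = ,
    fields = description:$1,ip:$2
    ignore_regex = (^#|^\s*$)
    skip_header_lines = 1
    type = malicious
    description = Maltiverse custom IPv4 feed
    url = https://api.maltiverse.com/collection/uYxZknEB8jmkCY9eQoUJ/download?filetype=splunk-ipv4&token=...

Comparing the stanza ES actually wrote for this feed against a working out-of-the-box threatlist stanza can help isolate whether a missing attribute is what triggers the file_path error.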
Good afternoon everyone! I'm hoping someone can assist in shedding some light on the following issue. I'm getting this error:

"Error in 'eval' command: The expression is malformed. Expected )."

and I'm uncertain why it isn't working. Perhaps there's something I'm missing, or I've written the eval incorrectly. Any assistance would be greatly appreciated! I've provided the end part of my overall query. The objective is to determine whether the SCORE increases by 5% each day over a 3-day period, and if it does on any of the days, to flag it.

| eval SCORE=(Asales + Bsales / ITEM_COUNT)
| stats dc(ITEM) as ITEM_COUNT values(ITEM) as ITEM sum(SALES) as TotalSales sum(SCORE) as SCORE by _time TITLE
| where ITEM_COUNT > 1
| eval 0DATE=if(_time >= relative_time(now(), "-1d@-d"),SCORE,0)
| eval 1DATE=if(_time >= relative_time(now(), "-2d@-1d"),SCORE,0)
| eval 2DATE=if(_time >= relative_time(now(), "-3d@-2d"),SCORE,0)
| eval 1TREND=if(1DATE > 0DATE*1.05,1,0)
| eval 2TREND=if(2DATE > 0DATE*1.05,1,0)
| eval BREAK=if((2DATE+3DATE) > 0,"TRUE","FALSE")
| table *

Everything works up to | where ITEM_COUNT > 1, and I'm getting results as anticipated, but the eval itself is failing. I've also attempted to add each of these evals via an appendpipe, but to no avail. Thanks in advance!
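The parse error most likely comes from the field names that start with a digit: in an eval expression, 0DATE is read as the number 0 followed by DATE, which breaks the parser. Such names must be double-quoted when assigned and single-quoted when read, and the snap units like @-d do not appear to be valid relative_time syntax (@d is). A hedged sketch of the evals with the quoting fixed — note that 3DATE in the BREAK line is never defined upstream, so the flag here assumes the intent was to fire when either day shows the 5% rise:

    | eval "0DATE"=if(_time >= relative_time(now(), "-1d@d"), SCORE, 0)
    | eval "1DATE"=if(_time >= relative_time(now(), "-2d@d"), SCORE, 0)
    | eval "2DATE"=if(_time >= relative_time(now(), "-3d@d"), SCORE, 0)
    | eval "1TREND"=if('1DATE' > '0DATE'*1.05, 1, 0)
    | eval "2TREND"=if('2DATE' > '0DATE'*1.05, 1, 0)
    | eval BREAK=if(('1TREND' + '2TREND') > 0, "TRUE", "FALSE")

An easier route is to avoid leading digits entirely (DATE0, DATE1, ...), which sidesteps the quoting rules altogether.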
Hi, I created a dashboard using JavaScript tabs and would like the panels, when clicked, to send users to the corresponding tab where they can see more details related to that panel. For example, if I were to click panel A, it would send me to tab A and load all the panels associated with panel A. Is this possible, given that the tabs use JavaScript? Here is a link to the tabs tutorial I followed, in case it helps: https://www.splunk.com/en_us/blog/tips-and-tricks/making-a-dashboard-with-tabs-and-searches-that-run-when-clicked.html

Thanks
Please help me understand what dependencies the Splunk Security Essentials (SSE) app has on the ES and ES Content Updates apps. I have posted this before, but it is still not clear to me. I appreciate your feedback. Thank you, and Happy Thanksgiving to you and yours.
I know there is an "advanced search" option, but I can't find a setting there to exclude links.
I am using Splunk Universal Forwarder 8.1.1 on a Linux server configured as a log aggregator. I have 7 well-defined sourcetypes in inputs.conf, based on log files in the following directories: /var/log/remote/LINUX, /var/log/remote/NETWORK, /var/log/remote/VMWARE.

inputs.conf for the LINUX directory:

[monitor:///var/log/remote/LINUX/*.log]
host_regex = LINUX\/(.+)_.+\.log
index = linux-log
sourcetype = linux-messages
disabled = 0

When I do a search, I see sourcetypes like these (in addition to the ones defined in inputs.conf):

cron
cron-4
syslog
cisco-4

I traced these back to learned sourcetypes. The cisco-4 sourcetype is looking at a file in /var/log/remote. Given the sourcetypes I have defined, I would not expect any visibility into that directory. Is there a way to disable the learned sourcetypes, or whitelist the ones I want?
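Learned sourcetypes usually mean some file under the monitored tree is being picked up by an input without an explicit sourcetype. A hedged sketch of one way to pin the stragglers down — the catch-all stanza and the remote-syslog name are assumptions to adapt, not part of the original config:

    # Catch anything under /var/log/remote not covered by the specific stanzas,
    # and give it a fixed sourcetype so nothing is "learned"
    [monitor:///var/log/remote]
    whitelist = \.log$
    sourcetype = remote-syslog
    index = linux-log
    disabled = 0

Alternatively, add blacklist = <pattern> to the broader input to exclude the unexpected files entirely. Running splunk btool inputs list --debug on the forwarder shows which stanza is actually claiming the stray files.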
Hi - I have some data that looks like this, which ingests into Splunk with no issues at all:

11/24/2021 08:47:21.321,"category":"transaction","tc"="93","amount_approved":"9.99","amount_requested":"493.95" etc etc etc
11/24/2021 08:45:14.121,"category":"transaction","tc"="93","amount_approved":"5.99","amount_requested":"5.99" etc etc etc
11/24/2021 08:45:14.121,"category":"transaction","tc"="01","amount_approved":"6.99","amount_requested":"6.99" etc etc etc

I want to do a search that filters the transactions down to only those where the amounts differ:

index=ABC sourcetype=XZX category=transaction tc=93 amount_approved!=amount_requested

That simple search doesn't work: Splunk is not filtering on the amount_approved!=amount_requested comparison. In the example above, I get both tc=93 transactions from the sample data, instead of just the first one. If I remove amount_approved!=amount_requested from the search and add it to a where clause like this:

index=ABC sourcetype=XZX category=transaction tc=93 | where amount_approved!=amount_requested

it works fine, as I only get one event back. What is wrong with my initial search line? I would like to avoid reading in all of the transactions before filtering, hence the need to put the comparison on the search line.
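In the base search, the right-hand side of field=value (or !=) is treated as a literal string or wildcard, never as another field name, so amount_approved!=amount_requested only checks that amount_approved is not the literal text "amount_requested" and matches almost everything. Field-to-field comparisons have to happen in where (or eval). A hedged sketch that also coerces the quoted values to numbers, in case lexicographic string comparison bites (e.g. "9.99" vs "493.95"):

    index=ABC sourcetype=XZX category=transaction tc=93
    | where tonumber(amount_approved) != tonumber(amount_requested)

The indexed filters on sourcetype and tc=93 still limit what is read off disk, so the where clause only runs over that subset rather than over all transactions.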
Hello Everyone,

I am integrating logs from Trend Micro Portable Security via HEC. Per the Trend Micro user guide, it needs a HEC token with access to 5 indexes, namely sacnnedlog, detectedlog, applicationinfo, updateinfo, and assetinfo; the names must not be changed, or it will not be able to send logs. So I created a HEC token with sourcetype=trendmicro and gave it access to all 5 indexes, created on the HF. The catch is that in our Splunk environment we cannot have 5 indexes for one source, so we created the 5 indexes on the HF (with the same names as above) and are trying to route all logs for sourcetype trendmicro to an index named app_trendmicro (created on the cluster master). I used the following props and transforms:

In props.conf:

[trendmicro]
TRANSFORMS-routing = trendmicro_routing

In transforms.conf:

[trendmicro_routing]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = app_trendmicro

However, we are not receiving logs, and the _internal index shows the error "Received event for unconfigured/disabled/deleted index".
Hello, I have been trying to get a Splunk config to work for a while, and have come here for help! I'm out of ideas.

I have network syslog from many different sources all being sent to a heavy forwarder. My hope is to match the syslog against two different regexes and have the matched data sent to two different locations. My configs:

props.conf:

[host::*]
TRANSFORMS-SYSLOG = send_to_serverA, send_to_serverB

transforms.conf:

[send_to_serverA]
regex = "regex goes here"
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverA

[send_to_serverB]
regex = "regex goes here"
DEST_KEY = _SYSLOG_ROUTING
FORMAT = serverB

outputs.conf:

[syslog:serverA_group]
server = x.x.x.1:514,x.x.x.2:514

[syslog:serverB_group]
server = x.x.1.1:514,x.x.1.2:514

This is currently not working, and it seems to have something to do with DEST_KEY = _SYSLOG_ROUTING; I get some very strange results. Can anyone point out where I have gone wrong, and whether this can be done?

Regards,
Ryan
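A few things stand out in the transforms (hedged, since the real regexes are redacted): the attribute name is REGEX in upper case (transforms.conf settings are case-sensitive, so a lowercase regex is ignored), the pattern should not be wrapped in quotes, and FORMAT must name the outputs.conf syslog group, which here is serverA_group/serverB_group rather than serverA/serverB. A sketch with those fixes:

    [send_to_serverA]
    REGEX = <regex goes here>
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = serverA_group

    [send_to_serverB]
    REGEX = <regex goes here>
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = serverB_group

With _SYSLOG_ROUTING, FORMAT is expected to match the <group> in a [syslog:<group>] outputs stanza, so the name mismatch alone could explain events never reaching either destination.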
How do I extract all values from a JSON field containing a list with multiple strings, using rex? The content of the field contains a list and a variable in string form. The number of items in the list can vary, as can the length of each item. The field is as follows:

"{\"variable2\":[\"AB1234\",\"BA1234\",\"DCBA\",\"ABCD\"],\"name\":\"namegiven\"}

So far, I was able to extract the name field with the following query:

| rex field=field.subfield.body max_match=0 "\"name\"\:\"(?<name>[a-zA-Z]+)\""

variable2 is a list with multiple strings, and this leaves me puzzled. It's not the expression to recognize the strings in the list that's the problem; I'm looking for a way to look inside the list, match the patterns, and find all items in it. Can someone help out?
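A hedged sketch of one approach, assuming the field content matches the sample above (and that the quotes in the field are plain, as the working name extraction suggests): capture the bracketed list first, then split it into a multivalue field. The helper field name items is made up here:

    | rex field=field.subfield.body "\"variable2\":\[(?<items>[^\]]+)\]"
    | eval items=split(replace(items, "\"", ""), ",")
    | mvexpand items

After the split, items holds one list entry per value, so mvexpand yields one event per entry; drop the mvexpand if a single multivalue field is what you want. If the field is valid unescaped JSON, spath with path=variable2{} is usually the cleaner route.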
Hello, thank you for taking the time to consider my question.

I currently have a working SPL search that retrieves IPv4 addresses from a CSV via inputlookup, which runs tremendously fast by itself. However, when I plug that inputlookup into a larger outer search that correlates those values with destination IPv4s seen and reported by our firewall provider, the results take much longer to appear (usually over 2 minutes total runtime, and that's only using the suspicious IPs in the CSV from the day before). Ideally this search would take less than a minute to complete, comparing around 25,000-30,000 IPv4s from the CSV with the several hundred reported by the firewall every 10 minutes or so. The syntax for the search is below:

index=firewall earliest=-10m@m latest=now vsys_name=Browser
    [| inputlookup phishCatch.csv
     | rename "IPv4" as dest_ip
     | table dest_ip]
| eval totalMBin=round(bytes_in/1024,2)
| rename generated_time as "Time Received" user as "Username" client_ip as "Source IPv4 Address" action as "Action Taken" totalMBin as "Total MB In" dest_ip as "Suspicious IPv4"
| table "Time Received", "Username", "Source IPv4 Address", "Suspicious IPv4", "Total MB In", "Action Taken"

I'm guessing that I will have to use some sort of acceleration to improve the speed, but I'm very much a Splunk novice and don't really understand data models or how Splunk acceleration actually works. Any advice on how best to proceed and improve the efficiency and speed of this search would be greatly appreciated! Thanks in advance.
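One commonly suggested alternative (a hedged sketch, assuming the CSV column is named IPv4 as above): skip the subsearch, which expands 25,000-30,000 values into the base query and can run into subsearch result limits, and instead filter with the lookup command after retrieval:

    index=firewall earliest=-10m@m latest=now vsys_name=Browser
    | lookup phishCatch.csv IPv4 AS dest_ip OUTPUT IPv4 AS phish_match
    | where isnotnull(phish_match)
    | eval totalMBin=round(bytes_in/1024,2)
    | rename generated_time as "Time Received" user as "Username" client_ip as "Source IPv4 Address" action as "Action Taken" totalMBin as "Total MB In" dest_ip as "Suspicious IPv4"
    | table "Time Received", "Username", "Source IPv4 Address", "Suspicious IPv4", "Total MB In", "Action Taken"

Since only a few hundred firewall events land in a 10-minute window, annotating them against the CSV is cheap. Whether this beats the subsearch depends on how selective the expanded IP terms are, so it is worth timing both variants.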
Hi, I need to use a 24-hour clock instead of AM/PM in my Dashboard Studio tables and charts, but they automatically use AM/PM. I have tried changing my browser locale, but that only works in search, not in Dashboard Studio. Any solutions appreciated! Thanks!
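One workaround (a hedged sketch, not a Dashboard Studio setting): format the time into a string inside the search itself, so the visualization just renders text and the locale never applies:

    ... | eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
        | table Time, ...

strftime with %H emits the 24-hour clock; the trade-off is that a string column loses the automatic time-axis handling charts give a real _time field.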
Hello, I have a setup similar to the example shown on this page, and we noticed that the firewalls show systematic TCP session breakdown/rebuild. So it looks like the default setting of autoLBFrequency=30 is in use. Further, it looks like the newer versions of the Splunk UF that we are on have deprecated the ability to disable load balancing. Can I set this setting to 86400 or something like that, so it doesn't break and recreate connections all the time? Are there any pitfalls with this approach? Are there any other hacks that would let me disable LB, which makes no sense when a group has just one indexer in it?

https://docs.splunk.com/Documentation/Forwarder/8.2.3/Forwarder/Configureforwardingwithoutputs.conf

[tcpout]
defaultGroup = indexer1,indexer2

[tcpout:indexer1]
server = 10.1.1.197:9997

[tcpout:indexer2]
server = 10.1.1.200:9997
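For reference, a hedged sketch of what raising the rebalance interval would look like per group (whether 86400 is a safe value for a given environment is an open question, not a recommendation):

    [tcpout:indexer1]
    server = 10.1.1.197:9997
    autoLBFrequency = 86400

    [tcpout:indexer2]
    server = 10.1.1.200:9997
    autoLBFrequency = 86400

With a single server per group there is nothing to balance across, so the main practical effect of a long autoLBFrequency is on connection churn; keep in mind that very long-lived connections can delay detection of a dead indexer and skew event distribution if more servers are ever added to a group.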