All Topics

I found the following search to identify Missing / New sourcetypes and made a few changes. I am getting data and my next enhancement is to add the latest date/time a sourcetype was 'seen'. Here is the search I am starting with: index=anIndex earliest=-4d latest=now | eval recent=if(_time>(now()-129600 ),1,0) ```<--- No Logs in 1.5 Days ``` | stats count(eval(recent=1)) AS CurrentCount count(eval(recent=0)) AS HistoricalCount BY sourcetype host | where ( (CurrentCount < 1 AND HistoricalCount > 0) OR ( CurrentCount > 0 AND HistoricalCount < 1)) ```<--- Missing & New``` | eval status=case(CurrentCount > 0 AND HistoricalCount > 0, "OK", CurrentCount < 1 AND HistoricalCount > 0, "MISSING", CurrentCount > 0 AND HistoricalCount < 1, "NEW", 1=1,"Unknown" ) | sort sourcetype | table status host sourcetype CurrentCount HistoricalCount I think this is returning the last time a sourcetype was seen: index=anIndex earliest=-2d latest=now | stats max(_time) as last_searched by sourcetype host | eval lastTime=strftime(last_searched, "%m/%d/%y %H:%M:%S") | sort sourcetype host | table host sourcetype lastTime But, when I try to add these two lines into my original query I do not get any data ? | stats max(_time) as last_searched by sourcetype host | eval lastTime=strftime(last_searched, "%m/%d/%y %H:%M:%S") I have tried placing it in several different places but always get 'No Results found...' What am I missing ?
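For what it's worth, the second stats likely finds nothing to work with because _time is no longer present after the first stats, so a later max(_time) cannot aggregate anything. A minimal, untested sketch that captures the last-seen time inside the original stats instead:

index=anIndex earliest=-4d latest=now
| eval recent=if(_time>(now()-129600),1,0) ```<--- No Logs in 1.5 Days```
| stats count(eval(recent=1)) AS CurrentCount count(eval(recent=0)) AS HistoricalCount max(_time) AS last_seen BY sourcetype host
| where ((CurrentCount < 1 AND HistoricalCount > 0) OR (CurrentCount > 0 AND HistoricalCount < 1)) ```<--- Missing & New```
| eval status=case(CurrentCount > 0 AND HistoricalCount > 0, "OK", CurrentCount < 1 AND HistoricalCount > 0, "MISSING", CurrentCount > 0 AND HistoricalCount < 1, "NEW", 1=1, "Unknown")
| eval lastTime=strftime(last_seen, "%m/%d/%y %H:%M:%S")
| sort sourcetype
| table status host sourcetype CurrentCount HistoricalCount lastTime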
We would like to send some additional data to Phantom indexes during playbook execution. Is there a Python library or syntax that we can use to write output to an index?
I want to search from a lookup table, get a field, and compare it to a search and pull the fields from that search based off of a common field. I would rather not use |set diff and its currently only showing the data from the inputlookup.         | set diff [| inputlookup all_mid-tiers WHERE host="ACN*" | fields username Unit ] [ search index=iis [| inputlookup all_mid-tiers WHERE host="ACN*" | fields username ] | dedup username | dedup SiteIDOverride | eval username=lower(username) | fields username SiteIDOverride unitType installVer os jkversion ] | join type=left [ search index="iis" sourcetype="iis" earliest=-7d@d [| inputlookup all_mid-tiers Where host="*ACN*" | fields username] | dedup username | eval username=lower(username) | eval timedelta=now()-_time | eval time_delta_days=floor(timedelta/86400) | stats first(time_delta_days) as Status by username | eval Status=if(Status<"0","0",Status) | eval StatA=Status | rangemap field=StatA OK=0-0 Monitor=1-1 Contact=2-9999 | rename range as Status ] | lookup all_mid-tiers host AS SiteIDOverride OUTPUT Unit Weaponsystem Last_access | eval Last_access=strftime(Last_access, "%Y-%m-%d") | rename Weaponsystem as unitType | dedup Unit | table Status Unit SiteIDOverride unitType installVer os jkversion Last_access     I can't seem to get it to pull SiteIDOverride unitType...^^ from the search. 
I am having trouble with using the time chart command effectively to make count of all workstations and with them broken down by location over time.  Currently my search is displaying each count of every workstation by location, but instead  I am trying to have a sum count of the workstations displayed over every day. This is the current search.     index=main $WSprefix$ sourcetype=syslog process=elcsend "\"config " CentOS | rex "([^!]*!){2}(?P<type>[^!]*)!([^!]*!){4}(?P<role>[^!]*)!([^!]*!){23}(?P<vers>[^!]*)" | dedup host | search role=std-dhcp | timechart span=1d count by host     This is one output for a location. This is an output for another location. I have implemented a dropdown menu that selects the location based on the hosts prefix. I am looking to have all of the CLTW workstations to be summed up as 1 count and so forth for the other locations.
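If the goal is one daily total per location rather than a separate series per host, a distinct count of hosts per day may be closer to it. An untested sketch (note that dedup host keeps only one event per host for the whole time range, which would break a per-day count, so it is dropped here):

index=main $WSprefix$ sourcetype=syslog process=elcsend "\"config " CentOS
| rex "([^!]*!){2}(?P<type>[^!]*)!([^!]*!){4}(?P<role>[^!]*)!([^!]*!){23}(?P<vers>[^!]*)"
| search role=std-dhcp
| timechart span=1d dc(host) AS workstations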
Hi, I am doing rex on a field that looks like this (showing multiple events below):
a#1|b#30|c#6|d#9
b#5|d#7|e#5|f#4
a#6|c#4|e#9
My rex is:
rex field=raw max_match=0 "((?<service>[^#]*)#(?<totalRows>[^\|]*)\|?)"
Resulting in:
service: a b c d    totalRows: 1 30 6 9
service: b d e f    totalRows: 5 7 5 4
service: a c e      totalRows: 6 4 9
How can I create a sum of all totalRows for each service? Basically, I am looking for something that will output like below:
service totalRows
a 7
b 35
c 10
d 16
e 14
f 4
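One way to pair the two multivalue fields back up and total them per service is mvzip plus mvexpand on top of the rex above (an untested sketch):

| eval pair=mvzip(service, totalRows, "#")
| mvexpand pair
| rex field=pair "(?<service>[^#]+)#(?<totalRows>.+)"
| stats sum(totalRows) AS totalRows BY service

With the sample events above this should yield a=7, b=35, c=10, d=16, e=14, f=4.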
Hello! Issue: Citrix servers keep generating a new GUID on reboot. I followed these steps from the Splunk docs for installing Splunk on imaging software: https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Integrateauniversalforwarderontoasystemimage
On the imaging server:
1. msiexec /i <executable> GENRANDOMPASSWORD=1 AGREETOLICENSE=yes LAUNCHSPLUNK=0 /quiet
2. Copied over the deployment server app.
3. Copied over splunk.secret.
4. Removed sslPassword from server.conf. (Even without starting Splunk, a password would still be generated in server.conf, and Splunk wouldn't start when an old password was set with the new splunk.secret.)
5. splunk.exe clone-prep-clear-config
On the new system:
1. A task was created to restart Splunk.
Outcome:
1. The new system checks in with the correct hostname, IP, and GUID.
2. On reboot, the system keeps generating a new GUID in the deployment server.
Hi, I need a search for the below use case.
Search for alert_type=ufa and alert_name="suspicious Downloads". Please include only the domains present in domains.csv in this search.
We are looking for users that trigger the one above AND this one:
Search for alert_type=ufa and alert_name="suspicious uploads". Please exclude all domains present in domains.csv from this search.
Thanks!
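One possible shape for this, as an untested sketch. It assumes domains.csv has a field named domain that matches a domain field in the events (both field names are assumptions), and the alert_name values are written here without the stray whitespace from the post. The first clause keeps only lookup domains via a subsearch, the second excludes them with NOT, and the final stats keeps users that triggered both:

(alert_type=ufa alert_name="suspicious Downloads" [ | inputlookup domains.csv | fields domain ])
OR (alert_type=ufa alert_name="suspicious uploads" NOT [ | inputlookup domains.csv | fields domain ])
| stats dc(alert_name) AS alerts_triggered values(domain) AS domain BY user
| where alerts_triggered = 2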
Hello everyone, I am currently working with the "row table expansion.js" from the dashboard examples. However, I am facing an issue where I can only achieve a single-level row expansion. My requirement is to have a two-level row expansion instead. Could you please assist me with achieving the desired two-level row expansion functionality? Thank you in advance.  
Hi, I need help enhancing the below search. It should match if users trigger one or more of these policies:
Index=dlp sourcetype=dlpalerts policy=" All Apps " OR policy=" Gmail" OR policy=" GDrive" AND ```one or more of these policies```
policy="All Policies" AND activity=upload ```exclude all instances present in the lookup table instance.csv from this search```
OR policy="All Apps - Password Protected Files - Alert" ```exclude all instances present in instance.csv from this search```
OR alert_type=ews and alert_name=" uploads" ```exclude all instance_id present in instance.csv from this search```
| stats earliest(_time) as incident_time, values(activity) as activity, values(instance_id) as instance_id, values(alert_type) as alert_type, values(alert_name) as alert_name by user, policy
Thanks
Hi all, I want to ask whether it is possible to alternate the stacking order of values in a stacked bar chart, so that one week field 1 is on the bottom and field 2 on top, and the next week the opposite. This is the current state, and next week it should be reversed. Is that even possible with a bar chart?
We have logs from multiple regions, but only want to report those that fall within each region's working hours. I created the following query, which works fine when I put in an absolute number, but it doesn't filter by the variables.
index=ovpm sourcetype=ovpm_global
| search "Service Name" = "WSB EXPRESS"
| eval region = case(substr(SYSTEMNAME, 1, 2) == "my", "AP", substr(SYSTEMNAME, 1, 2) == "cz", "EU", substr(SYSTEMNAME, 1, 2) == "us", "AM", true(), "Other")
| eval regionStartHour = tonumber(case(substr(SYSTEMNAME, 1, 2) == "my", 0, substr(SYSTEMNAME, 1, 2) == "cz", 8, substr(SYSTEMNAME, 1, 2) == "us", 16, true(), 0))
| eval regionEndHour = tonumber(case(substr(SYSTEMNAME, 1, 2) == "my", 8, substr(SYSTEMNAME, 1, 2) == "cz", 16, substr(SYSTEMNAME, 1, 2) == "us", 24, true(), 0))
| eval hr = strftime(_time, "%H")
| search hr>=regionStartHour AND hr<=regionEndHour
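The last line is likely the problem: the search command treats regionStartHour/regionEndHour as literal strings rather than as the values of other fields, and strftime makes hr a string. A hedged fix (untested) is to make hr numeric and compare field to field with where; the upper bound is exclusive here so each event lands in only one region, adjust if the intent differs:

| eval hr = tonumber(strftime(_time, "%H"))
| where hr >= regionStartHour AND hr < regionEndHour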
The company's Splunk Enterprise was recently updated to 9.0.3, but it is still showing the vulnerability CVE-2021-32036, presumably because 9.0.3 installs MongoDB 4.2.17. Is there any documentation that confirms which version of MongoDB is packaged and installed with Splunk Enterprise 9.0.3 and above? What version of Splunk Enterprise would be needed to mitigate this issue? (MongoDB must be version 4.2.18 or above.)
I created a standalone Splunk container on OpenShift Container Platform with the help of the Splunk Operator for Kubernetes. Unfortunately, I can't run Splunk queries. The following error occurs:
Job terminated unexpectedly. The search job has failed due to an error. You may be able to view the job in the job inspector.
When I open the job inspector and search for errors with Ctrl+F, I only find something like:
Error ScopedAliveProcessToken, unable to create FIFO, path="/opt/splunk/var/run/splunk/dispatch/1684409027.39/alive.token", permission denied
Please help me, I am desperate.
Hi team, we are getting "SSH Weak Key Exchange Algorithms Enabled" vulnerabilities reported on Splunk UF hosts. The summary from the scan is: "The remote SSH server is configured to allow weak key exchange algorithms." Please help me resolve this vulnerability.
Hi, I run Splunk 9.0.3 with IT Essentials 4.15.0 with Exchange content pack 1.5.1 (DA-ITSI-CP-microsoft-exchange). We have an Exchange 2016 deployment on-premises.  Reviewing the built-in dashboards,  I saw empty panels in some dashboards. An example is the panel "Outbound Message Volume" in the "Outbound Messages - Microsoft Exchange" dashboard. (see attachment)  I dug into the query and replaced all macros, the resulting query was: eventtype=smtp-outbound | join message_id [ search eventtype=storedriver-receive | fields message_id,sender] | eval sender=lower(sender) | eval sender_domain=lower(sender_domain) | eval sender_username=lower(sender_username) | eval recipients=lower(recipients)|eval recipient=lower(recipient)|eval recipient_domain=lower(recipient_domain)|eval recipient_username=lower(recipient_username) | table _time,message_id,sc_ip,sender,recipient_count,recipients,total_bytes | eval total_kb=total_bytes/1024 | timechart fixedrange=t bins=120 per_second(total_kb) as "Bandwidth" the chart is created based on the value of total_kb which is calculated based on the extracted field total_bytes. I removed the last command (timechart) and total_bytes does not exist, so total_kb is not calculated. I tried to find the issue and the eventtype corresponds to the sourcetype MSExchange:2013:MessageTracking  . I looked into the props.conf in the path <drive>:\Program Files\Splunk\etc\apps\DA-ITSI-CP-microsoft-exchange\default  and there are no evals created for the total_bytes field.  [MSExchange:2013:MessageTracking] SHOULD_LINEMERGE = false CHECK_FOR_HEADER = false REPORT-fields = msexchange2013msgtrack-fields,msgtrack-extract-psender,msgtrack-psender,msgtrack-sender,msgtrack-recipients,msgtrack-recipient TRANSFORMS-comments = ignore_comments FIELDALIAS-server_hostname_as_dest = server_hostname AS dest FIELDALIAS-host_as_dvc = host AS dvc EVAL-src=coalesce(original_client_ip,cs_ip) EVAL-product = "Exchange" EVAL-vendor = "Microsoft" LOOKUP-event_id_to_action = event_id_to_action_lookup event_id OUTPUT action FIELDALIAS-user = sender_username AS user FIELDALIAS-orig_dest = ss_ip AS orig_dest FIELDALIAS-dest_ip = ss_ip AS dest_ip FIELDALIAS-return_addr = return_path AS return_addr FIELDALIAS-size = message_size AS size FIELDALIAS-subject = message_subject AS subject EVAL-orig_src=coalesce(original_client_ip,cs_ip) EVAL-protocol = "SMTP" EVAL-vendor_product = "Microsoft Exchange" EVAL-sender = coalesce(PurportedSender,sender) EVAL-src_user = coalesce(PurportedSender,sender) EVAL-sender_username = coalesce(psender_username,sender_username) EVAL-sender_domain = coalesce(psender_domain,sender_domain) I also checked msexchange2013msgtrack-fields entry in the transforms.conf and the field "total_bytes" appears there. [msexchange2013msgtrack-fields] FIELDS = "date_time","cs_ip","client_hostname","ss_ip","server_hostname","source_context","connector_id","source_id","event_id","internal_message_id","message_id","network_message_id","recipients","recipient_status","total_bytes","recipient_count","related_recipient_address","reference","message_subject","sender","return_path","message_info","directionality","tenant_id","original_client_ip","original_server_ip","custom_data" DELIMS = , As a final check, I look for the Exchange logs and the total_bytes field is included in the logs.  In the extract below the total_bytes appears in the correct position with a value of 55115. 
#Software: Microsoft Exchange Server #Version: 15.01.2507.021 #Log-type: Message Tracking Log #Date: 2023-05-18T09:00:00.691Z #Fields: date-time,client-ip,client-hostname,server-ip,server-hostname,source-context,connector-id,source,event-id,internal-message-id,message-id,network-message-id,recipient-address,recipient-status,total-bytes,recipient-count,related-recipient-address,reference,message-subject,sender-address,return-path,message-info,directionality,tenant-id,original-client-ip,original-server-ip,custom-data,transport-traffic-type,log-id,schema-version 2023-05-18T09:00:00.691Z,,HOST.xxx.y.z,,HOST,08DB13554C308059;2023-05-18T09:00:00.637Z;ClientSubmitTime:2023-05-18T09:00:00.117Z,,STOREDRIVER,DELIVER,140492675219457,<b133b1b22b184c049dacea930775bae5@xxx.yyy.z>,bfb98eea-8ce0-42a6-a016-08db577e42ed,mary@xx.y,,55115,1,,,RE: ArcGISDataDevTraining,john@xx.y,john@xx.y,2023-05-18T09:00:00.120Z;SRV=XXX.yy.z:TOTAL-SUB=0.234|SA=0.194|MTSS-PEN=0.041(MTSSD-PEN=0.037(MTSSDA=0.002|MTSSDC=0.008|SDSSO-PEN=0.012 (SMSC=0.008(X-SMSDR=0.001)|MTSSDM-PEN=0.004)));SRV=XXX.yyy.zz:TOTAL-HUB=270.010|SMR=0.145(SMRDI=0.006|SMRC=0.138(SMRCL=0.107|X-SMRCR=0.138))|CAT=0.124(CATORES=0.016 (CATRS=0.016(CATRS-Transport Rule Agent=0.004(X-ETREX=0.004)|CATRS-Index Routing Agent=0.011 ))|CATORT=0.104(CATRT=0.104(CATRT-Journal Agent=0.104)))|QDM=0.010;SRV=ATLHQMPHSMX1.eusc.europa.eu:TOTAL-DEL=0.060|SMR=0.006(SMRDI=0.005)|SDD=0.053(SDDSPCR=0.003(SDDCC=0.003)|SDDSPCS=0.002(SDDOS=0.002)|SDDPM=0.019(SDDPM-Conversations Processing Agent=0.012|SDDPM-Mailbox Rules Agent=0.004)|SDDSCMG=0.007(SDDCMM=0.002)|SDDCM=0.001|SDDSDMG=0.017(SDDR=0.017)|X-SDDS=0.011),Originating,,192.168.X.X,192.168.X.X,"S:IncludeInSla=True;S:MailboxDatabaseGuid=d3cbc250-34d2-4a36-8f6e-dab3d1248894;S:Mailboxes=ce6cae16-5bd9-4b7d-a1c4-9ae851224466;S:StoreObjectIds=AAAAAFjvGJqmWmRHocS0e5d51KAHAHn/uaFmuTFBgJ5aRTROcxAABSlL0VcAANwUEnI+lypImRLmR1/oEQoAA2WHe9sAAA==;S:FromEntity=Hosted;S:ToEntity=Hosted;S:P2RecipStat=0,003/1;S:MsgRecipCount=1;S:SubRecipCount=1;S:DeliveryLatency=0.571;S:AttachCount=1;S:E2ELatency=0.572;S:DeliveryPriority=Normal;S:AccountForest=xxx.yyyy.x",Email,a0cd35de-8a46-490d-ec84-08db577e4322,15.01.2507.021 what could be the reason why it does not get parsed correctly? Cheers  
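One hedged diagnostic (untested; the index name is a placeholder) that might narrow this down: the msexchange2013msgtrack-fields transform splits on commas, so if some events contain extra commas (for example inside quoted values such as message_subject or custom_data) or extra trailing columns, the positional extraction can shift. Counting comma-separated values per event shows whether the layout is consistent across events:

index=yourindex sourcetype="MSExchange:2013:MessageTracking"
| eval n_values=mvcount(split(_raw, ","))
| stats count BY n_values
| sort n_values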
Hi, please let us know how to address the issue of the Splunk Add-on for New Relic not working properly. https://apps.splunk.com/app/3465/
・Splunk Enterprise: 8.2.9
・Add-on for New Relic: 2.2.6
I installed the add-on in Splunk, selected Create New Input -> New Relic Account Input, and performed the account integration operation, but the status is false and data collection does not start. I checked splunkd.log and the following error is output at each polling interval.
05-18-2023 16:48:42.808 +0900 ERROR ExecProcessor [1472 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/new_relic_account_input.py" /opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't match a supported version!
05-18-2023 16:48:42.808 +0900 ERROR ExecProcessor [1472 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/new_relic_account_input.py" RequestsDependencyWarning)
By the way, if we downgrade the Add-on for New Relic to version 2.2.0 and perform the account integration operation again, some data can be integrated, but some data cannot be integrated due to the following error.
05-18-2023 16:53:24.254 +0900 ERROR ExecProcessor [294378 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/new_relic_account_input.py" json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
05-18-2023 16:53:24.280 +0900 ERROR ExecProcessor [294378 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/new_relic_account_input.py" ERRORExpecting value: line 1 column 1 (char 0)
Hi Splunkers, I am facing an error popup while loading my custom-made dashboard, which states:
"A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details."
I am using JavaScript (.js scripts) for some functionality in my dashboard. I even tried adding version="1.1" in the form, but it didn't help. Any suggestions would be really helpful.
Console error:
dashboard_1.1.js:1317 Error: Script error for: theme_utils http://requirejs.org/docs/errors.html#scripterror at makeError (eval at module.exports  at HTMLScriptElement.onScriptError (eval at module.exports..............)
Thanks
I have set up my intel download; however, when I run `http_intel`, multiple IOCs/values are grouped into a single row. How do I make it line by line, with each unique value on its own row?
Hello Splunk Community,
I'm currently working on creating a search using the tstats command to identify user behavior related to multiple failed login attempts followed by a successful login. I want to use tstats for this due to its efficiency with high volumes of data, compared to the transaction command.
In my case, I want to be able to detect an event sequence where a user has had, let's say, 10 or more failed login attempts, followed by a successful login attempt, within a specified time window (for example, within an hour).
I understand that tstats doesn't provide the same level of detail as transaction for creating sequences of events. However, I'm looking for suggestions on how to use tstats, combined with other SPL commands, to achieve a similar result.
Here's an example of the type of data I'm dealing with:
_time        user   status
1622890560   user1  failure
1622890620   user1  failure
1622890680   user1  success
In this example, the status field contains "success" or "failure", and the user field contains the user ID.
Any guidance or suggestions would be greatly appreciated. Thanks in advance for your help!
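As a starting point, here is one hedged sketch (untested). It assumes the events are mapped to an accelerated Authentication datamodel so that tstats can see user and action; with raw indexes only, status would have to be an indexed field instead. It counts failures and successes per user per hour and flags users whose hour contains 10+ failures alongside at least one success; note this checks co-occurrence inside the window rather than strict ordering, which would need streamstats over the raw events:

| tstats count FROM datamodel=Authentication WHERE (Authentication.action="failure" OR Authentication.action="success") BY _time span=1h, Authentication.user, Authentication.action
| rename "Authentication.user" AS user, "Authentication.action" AS action
| eval failures=if(action="failure", count, 0), successes=if(action="success", count, 0)
| stats sum(failures) AS failures sum(successes) AS successes BY _time user
| where failures >= 10 AND successes > 0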
Hi Team, I am using the query below and want to create a table out of the raw data.
Splunk query:
index=* ("Exception occurred during ORC" OR ("Response received, system status NOK") AND NOT "n. a." sourcetype="kube:container:*slimits*-service") namespace IN ("dk1692-e","dk1399-c","dk1371-c","dk1398-c","dk1400-d")
Sample output:
1. 2023-05-18 05:16:48,083 INFO [com.db.gtb.bankingapi.slimits.orc.service.internal.OrcServiceImpl] (task-pool-2) - [e2eCallReference: ] Response received, system status NOK for FCS
2. 2023-05-14 22:32:18,020 ERROR [com.db.gtb.bankingapi.slimits.orc.scheduled.OrcScheduler] (task-pool-3) - [e2eCallReference: ] Exception occurred during ORC, due to Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: ems-pool - Connection is not available, request timed out after 5000ms..
3. 2023-05-13 05:06:05.808 [INFO] [scheduling-1] orcCheck(OrcServiceImpl.java:60) - Response received, system status NOK for ROUTER
2023-05-13 05:06:13,067 INFO [com.db.gtb.bankingapi.slimits.orc.service.internal.OrcServiceImpl] (task-pool-2) - [e2eCallReference: ] Response received, system status NOK for EMS
Expected output:
Date          Time     Status
2023-05-13    05:06    system status NOK for EMS
2023-05-14    22:32    Exception occurred during ORC
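One possible shape (an untested sketch; it assumes the timestamp always starts the event and that the Status column should carry either the ORC exception phrase or the "system status NOK for X" phrase):

index=* sourcetype="kube:container:*slimits*-service" namespace IN ("dk1692-e","dk1399-c","dk1371-c","dk1398-c","dk1400-d") ("Exception occurred during ORC" OR "Response received, system status NOK") NOT "n. a."
| rex "^(?<Date>\d{4}-\d{2}-\d{2})[ T](?<Time>\d{2}:\d{2})"
| rex "(?<Status>Exception occurred during ORC|system status NOK for \w+)"
| table Date Time Status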