All Topics

I am using a Splunk trial version and recently received the message "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch.". I checked the Splunk community and the Splunk documentation, and per the suggested solution I changed the minimum free disk space to 20000 MB through Splunk Web, but I am still receiving the same message. Because of this error, indexing has stopped and the dashboards display no data. I also made changes to the main and audit indexes by removing TSIDX files older than 30 days, but I am still receiving the same message. Can anyone please help me find a solution? I am new to Splunk and not very proficient in the admin part. Do I need to increase the instance size? I am currently using a T3 Large (120 GB). Requesting every Splunk professional to suggest a solution.
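The check behind that warning is simple: before indexing or dispatching searches, Splunk compares the free space on the relevant partition against its minimum-free-space threshold. A minimal Python sketch of the same kind of comparison (the path and threshold here are illustrative only, not what Splunk actually calls internally):

```python
import shutil

def has_min_free_space(path, min_free_mb):
    """Return True when the filesystem holding `path` has at least
    `min_free_mb` megabytes free - conceptually the same comparison
    Splunk's minimum-free-space setting performs.  In the post's case
    the real path would be /opt/splunk/var/run/splunk/dispatch."""
    free_mb = shutil.disk_usage(path).free // (1024 * 1024)
    return free_mb >= min_free_mb

# Splunk's default threshold is 5000 MB.
print(has_min_free_space("/", 5000))
```

One caveat worth noting: raising the threshold from 5000 to 20000 MB makes this check stricter, not looser, since the partition must now clear a higher bar; freeing actual disk space is what makes such a warning go away.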
This is in reference to the SolarWinds Add-on for Splunk (SolarWinds Add-on for Splunk | Splunkbase) @ehaddad_splunk. Is there a limit to the structure or size of the SWQL? I can get alerts, inventory, and simple queries with no issue, but when I run a complex search I get status=400. The search below works inside SWQL Studio:

SELECT IPAddress1, IPAddress2,
  CASE WHEN IPAddress1 IS NULL THEN NULL ELSE H1.Hostname END AS Hostname1,
  CASE WHEN IPAddress2 IS NULL THEN NULL ELSE H2.Hostname END AS Hostname2,
  TotalBytesIngress, TotalPacketsIngress, TotalBytesEgress, TotalPacketsEgress,
  TotalBytesIngress + TotalBytesEgress AS TotalBytes,
  TotalPacketsIngress + TotalPacketsEgress AS TotalPackets
FROM (SELECT TOP 10 SourceIP AS IPAddress1, DestinationIP AS IPAddress2,
    MAX(SourceHostnameID) AS HostnameID1, MAX(DestinationHostnameID) AS HostnameID2,
    SUM(IngressBytes) AS TotalBytesIngress, SUM(IngressPackets) AS TotalPacketsIngress,
    SUM(EgressBytes) AS TotalBytesEgress, SUM(EgressPackets) AS TotalPacketsEgress,
    SUM(IngressBytes) + SUM(EgressBytes) AS TotalBytes,
    SUM(IngressPackets) + SUM(EgressPackets) AS TotalPackets
  FROM Orion.Netflow.FlowsByConversation Flows
  WHERE (Timestamp >= (GetUTCDate() - 0.04167))
  GROUP BY (SourceIP, DestinationIP)
  ORDER BY TotalBytes DESC) OuterFlows
LEFT JOIN Orion.Netflow.Hostnames AS H1 ON H1.ID = OuterFlows.HostnameID1
LEFT JOIN Orion.Netflow.Hostnames AS H2 ON H2.ID = OuterFlows.HostnameID2
ORDER BY TotalBytes DESC, IPAddress1 ASC, IPAddress2 ASC

The error message returned is:

2021-03-18 11:43:11,872 +0000 log_level=ERROR, pid=30166, tid=Thread-4, file=engine.py, func_name=_send_request, code_line_no=325 | [stanza_name="test_001"] The response status=400 for request which url=https://10.1.2.21:17778/SolarWinds/InformationService/v3/Json/Query?query=SELECT IPAddress1, IPAddress2, ... (the same SWQL as above) ... ORDER BY TotalBytes DESC, IPAddress1 ASC, IPAddress2 ASC and method=GET.

However, this one works fine as a successful search:

2021-03-18 11:55:37,398 +0000 log_level=INFO, pid=12462, tid=Thread-4, file=http.py, func_name=request, code_line_no=169 | [stanza_name="test_002"] Invoking request to [https://10.1.2.1:17778/SolarWinds/InformationService/v3/Json/Query?query=SELECT%20Caption%20AS%20NodeName,%20IPAddress%20FROM%20Orion.Nodes] finished
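One difference visible in the two log lines: the failing request embeds the SWQL in the URL with raw spaces, while the successful one is percent-encoded. Whether the add-on skipped the encoding for the long query is only a guess from those logs, but here is a sketch of the encoding step, reproducing the working request exactly (`safe=","` keeps commas literal, as in that log line):

```python
from urllib.parse import quote

swql = "SELECT Caption AS NodeName, IPAddress FROM Orion.Nodes"
base = "https://10.1.2.1:17778/SolarWinds/InformationService/v3/Json/Query?query="

# Percent-encode the query string: spaces become %20, commas stay literal.
encoded = quote(swql, safe=",")
print(base + encoded)
# -> ...Query?query=SELECT%20Caption%20AS%20NodeName,%20IPAddress%20FROM%20Orion.Nodes
```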
First time installing Splunk. I tried to reinstall Splunk and the web server is still not starting. I also need to change the mgmt port number, as the previous installation is still using the default port, and I have no idea how to disable the previous session.

./splunk start

Splunk> Winning the War on Error

Checking prerequisites...
    Checking mgmt port [8111]: open
    Checking conf files for problems...
    Done
    Checking default conf files for edits...
    Validating installed files against hashes from '/Applications/splunkforwarder/splunkforwarder-8.1.2-545206cc9f70-darwin-64-manifest'
    All installed files intact.
    Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done

bin % ./splunk cmd btool web list --debug | grep startwebserver
/Applications/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/web.conf startwebserver = 0
The other day I noticed a subset of workstation-based deployment clients had an app installed that was meant only for servers. It turns out the workstations received the server-oriented app because of a whitelist entry match on the InstanceID (GUID) attribute. The whitelist pattern was constructed with matching on the clientName or hostname attributes in mind. I was able to work around the problem by making the whitelist entry's regular expression less ambiguous. This got me wondering whether there is a way to control which attribute matching is conducted on for a given whitelist entry. I see that for CSV-based whitelist entries there are all manner of new features which enable specification of the attribute on which matching occurs (field name). Is the same sort of control possible through non-CSV-based whitelist entries?
Hello, good day. I wish to change the dashboard label name dynamically by using tokens. How can we achieve this? Please help me out. Example:

<form>
<label>R COVERAGE &D2-R COVERAGE</label>

I want to pass token results into the label, so when result=TC_result the label should read "R Coverage Reports", and if result=TC_D2_results the label should change to "D2 R Coverage Reports". Thank you.
Hello, I would like to perform an insert/update on my DB table out of DB Connect. The corresponding db_outputs.conf looks as follows:

[Z_USERS]
connection = HANA_S4_FRUN_FH1
customized_mappings = id:id:16,name:name:32,username:username:32,email:email:32,address.street:address_street:32,address.suite:address_suite:32,address.city:address_city:32,address.zipcode:address_zipcode:32,address.geo.lat:address_geo_lat:32,address.geo.lng:address_geo_lng:32,phone:phone:32,website:website:32,company.name:company_name:32,company.catchphrase:company_catchphrase:64,company.bs:company_bs:64
disabled = 0
is_saved_search = 0
scheduled = 0
table_name = Z_USERS
ui_query_catalog = SAPHANADB
ui_query_schema = FRX_READ_USER
ui_query_table = Z_USERS
using_upsert = 1
unique_key = id,name,username
#interval = 40 * * * *

So my DB primary key is a combination of id+name+username. Unfortunately it won't work. When I define the above unique_key to be only one field, and of course create the underlying DB table correspondingly, it works fine, so the issue is not lying anywhere else; it definitely has to do with the composite primary key. How would I overcome this? Is it possible? One idea would be to concatenate the fields of the key into an additional key, but that means changing the DB structure (an additional column), and that is not always possible. Ideally the solution should be that DB Connect is able to handle this, especially since compound keys are not such an uncommon case, I think. Kind regards, Kamil
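For reference, this is what using_upsert=1 with a compound unique_key has to translate to on the database side. The sketch below uses SQLite (3.24+) rather than HANA, with hypothetical column values, purely to illustrate composite-key upsert semantics:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Composite primary key, mirroring unique_key = id,name,username.
con.execute("""CREATE TABLE Z_USERS (
    id TEXT, name TEXT, username TEXT, email TEXT,
    PRIMARY KEY (id, name, username))""")

def upsert(row):
    """Insert a row; on a (id, name, username) collision, update the
    non-key columns instead - the behaviour the conf requests."""
    con.execute("""INSERT INTO Z_USERS (id, name, username, email)
        VALUES (:id, :name, :username, :email)
        ON CONFLICT (id, name, username) DO UPDATE SET email = excluded.email""", row)

upsert({"id": "1", "name": "n", "username": "u", "email": "a@example.com"})
upsert({"id": "1", "name": "n", "username": "u", "email": "b@example.com"})
print(con.execute("SELECT email FROM Z_USERS").fetchall())  # [('b@example.com',)]
```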
Hi there! I have a use case wherein I need to import the complete database - one particular table from ALL the schemas of ALL the catalogs available in a connection, in one input. I do have access to the database as well. Is there a way around this? Thanks in advance for your help!
Hi, I have created a KV store where the _key value should be the avc_id field. In my case the _key value is auto-created. How do I correct this?
I have inserted the same data into Splunk and MySQL.

Splunk query:

index=sysmon EventCode=3
| stats count as sysmon_count by img_name
| sort sysmon_count desc
| join type=inner img_name
    [ search index=winevent EventCode=5156
    | stats count as winevt_count by img_name ]
| table img_name, sysmon_count, winevt_count

Result:

img_name          sysmon_count  winevt_count
splunkd.exe       3697          3701
python3.exe       614           3071
streamfwd.exe     614           1228
chrome.exe        211           910
svchost.exe       36            97
System            22            34
taskhostw.exe     1             2
whale_update.exe  1             1

MySQL query:

select a.img_name, a.sysmon_count, b.winevt_count
from (select img_name, count(*) as sysmon_count
      from sysmon where eventcode = 3
      group by img_name order by sysmon_count desc) a
join (select img_name, count(*) as winevt_count
      from winevent where eventcode = 5156
      group by img_name order by winevt_count desc) b
on a.img_name = b.img_name

Result:

img_name                sysmon_count  winevt_count
splunkd.exe             3697          3701
python3.exe             614           3071
streamfwd.exe           614           1228
chrome.exe              211           910
svchost.exe             36            97
System                  22            34
RuntimeBroker.exe       2             2
taskhostw.exe           1             2
backgroundTaskHost.exe  1             1
OfficeClickToRun.exe    1             1
POWERPNT.EXT            1             1
MsMpEng.exe             1             1
CEIP.exe                1             1
whale_update.exe        1             1

Added: I have found the cause of the join problem. MySQL is case-insensitive but Splunk is case-sensitive. The Splunk query gets the same result as MySQL when the join field (img_name) is changed to lowercase:

index=sysmon EventCode=3
| eval img_name = lower(img_name)
| stats count as sysmon_count by img_name
| sort sysmon_count desc
| join type=inner img_name
    [ search index=winevent EventCode=5156
    | eval img_name = lower(img_name)
    | stats count as winevt_count by img_name ]
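The case-sensitivity difference described above is easy to reproduce outside either system. A small Python sketch (with made-up counts) of an inner join with and without case folding:

```python
# Two hypothetical count tables keyed by image name; note "System" vs
# "system", which match in MySQL's default collation but not in Splunk.
sysmon = {"splunkd.exe": 3697, "System": 22}
winevent = {"splunkd.exe": 3701, "system": 34}

def inner_join(a, b, fold_case=False):
    """Join two dicts on their keys, optionally lowercasing the key
    first - the same fix as the eval img_name = lower(img_name) above."""
    norm = (lambda k: k.lower()) if fold_case else (lambda k: k)
    bn = {norm(k): v for k, v in b.items()}
    return {norm(k): (v, bn[norm(k)]) for k, v in a.items() if norm(k) in bn}

print(inner_join(sysmon, winevent))                   # "System" drops out
print(inner_join(sysmon, winevent, fold_case=True))   # "system" now matches
```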
Hi, I have a scheduled search with summary indexing enabled, and I also have a summary index created, but the output of the scheduled search is not sent to the summary index. Summary index = "test_summary". Scheduled search name = test_summary_report, summary indexing is enabled, and the cron schedule is set to run every minute. What could be the issue?
The query below gives me the earliest trigger_name according to the Splunk log timestamps. But I have a custom timestamp field called TIMESTAMP_DERIVED which does not match _time, and I want to do my earliest calculation based on that field. Is this possible?

eventtype=sfdc-event-log | stats earliest(TRIGGER_NAME), earliest(TRIGGER_TYPE) by REQUEST_ID
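What an "earliest by the custom field" calculation has to do, sketched in Python with hypothetical events (in SPL terms the usual approach is to parse TIMESTAMP_DERIVED with strptime() and take the minimum per REQUEST_ID, rather than relying on _time):

```python
from time import strptime, mktime

# Hypothetical events whose TIMESTAMP_DERIVED disagrees with _time order.
events = [
    {"REQUEST_ID": "r1", "TRIGGER_NAME": "B", "TIMESTAMP_DERIVED": "2021-03-18T12:00:05"},
    {"REQUEST_ID": "r1", "TRIGGER_NAME": "A", "TIMESTAMP_DERIVED": "2021-03-18T11:59:59"},
]

def earliest_by(events, key, ts_field, fmt="%Y-%m-%dT%H:%M:%S"):
    """Per key, keep the event whose ts_field parses to the smallest
    time - an 'earliest()' computed over the custom field, not _time."""
    best = {}
    for e in events:
        t = mktime(strptime(e[ts_field], fmt))
        if e[key] not in best or t < best[e[key]][0]:
            best[e[key]] = (t, e)
    return {k: v[1] for k, v in best.items()}

print(earliest_by(events, "REQUEST_ID", "TIMESTAMP_DERIVED")["r1"]["TRIGGER_NAME"])  # A
```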
Hi Splunkers, I have gotten help on this type of problem before and it has been very useful. However, I am still stuck, though almost there; I need some guidance.

Scenario: Ingestion_Time_Logged, a field I created, should occur twice within 30 minutes, at minute 7 and then minute 37. If an event occurs at 6:00, Ingestion_Time_Logged should be 6:07, and if an event occurs at 6:30, Ingestion_Time_Logged should be 6:37. The minute should always land on the next exact 7th minute or the next exact 37th minute.

This is what I have; there is an issue when the minute is before the 7th minute and when the minute is just shy of the 37th minute. I am open to any suggestions; perhaps I need a new approach here.

(index=foo Type="black") OR (index="boo")
| eval CreationTime=case(Type="creation", loggedEventTime)
| eval CreationTime_epoch=strptime(CreationTime, "%Y-%m-%d %H:%M:%S.%6N")
| eval latestCreated_hour=tonumber(strftime(CreationTime_epoch, "%H"))
| eval latestCreated_min=tonumber(strftime(CreationTime_epoch, "%M"))
| eval latestCreated_sec=round(CreationTime_epoch%60,6)
| eval Ingestion_Time_Logged=strftime(case(latestCreated_hour=23 OR latestCreated_min>07,CreationTime_epoch-CreationTime_epoch_epoch%1800+2220+latestCreated_sec,CreationTime_epoch=0,CreationTime_epoch+420,1=1,CreationTime_epoch),"%Y-%m-%d %H:%M:%S.%6N")
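The rounding rule described above can be checked against a plain-arithmetic sketch. This assumes an event landing exactly on the 7th (or 37th) minute rolls to the next slot, which the post leaves ambiguous:

```python
def next_logging_minute(epoch):
    """Map an event's epoch time to the next :07 or :37 minute boundary.
    Examples from the post: 6:00 -> 6:07 and 6:30 -> 6:37; an event at
    6:40 rolls over to 7:07 in the next hour."""
    minute = (epoch % 3600) // 60
    hour_start = epoch - (epoch % 3600)
    if minute < 7:
        return hour_start + 7 * 60
    if minute < 37:
        return hour_start + 37 * 60
    return hour_start + 3600 + 7 * 60

assert next_logging_minute(6 * 3600) == 6 * 3600 + 7 * 60             # 6:00 -> 6:07
assert next_logging_minute(6 * 3600 + 30 * 60) == 6 * 3600 + 37 * 60  # 6:30 -> 6:37
```

In SPL the equivalent would be a case() over the minute-of-hour rather than arithmetic on the half-hour bucket, which sidesteps the before-:07 and near-:37 edge cases.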
I'm getting this error when I run a report: "External command based lookup 'x' is not available because KV Store initialization has failed. Contact your system administrator." Additionally, I updated the server.pem expiration date and still see this error.
When using the sensorsearch command included as part of the VMware Carbon Black EDR On-Prem App, I get a Python ValueError and only a small number of results, or none (depending on the query). For example, the following query for all sensor information:

| sensorsearch

should return details of all sensors, but instead returns details on between 5 and 20 sensors and the following stack trace:

Error: error searching for None in Cb Response: invalid literal for int() with base 10: ''
stacktrace: Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\DA-ESS-CbResponse\bin\cbhelpers.py", line 120, in transform
    yield self.generate_result(result)
  File "C:\Program Files\Splunk\etc\apps\DA-ESS-CbResponse\bin\sensor_search.py", line 63, in generate_result
    result = super(SensorSearchCommand, self).generate_result(data)
  File "C:\Program Files\Splunk\etc\apps\DA-ESS-CbResponse\bin\cbhelpers.py", line 103, in generate_result
    rawdata = dict((field_name, getattr(data, field_name, "")) for field_name in self.field_names)
  File "C:\Program Files\Splunk\etc\apps\DA-ESS-CbResponse\bin\cbhelpers.py", line 103, in <genexpr>
    rawdata = dict((field_name, getattr(data, field_name, "")) for field_name in self.field_names)
  File "C:\Program Files\Splunk\etc\apps\DA-ESS-CbResponse\bin\cbapi\models.py", line 101, in __get__
    return coerce_type(value)
ValueError: invalid literal for int() with base 10: ''

Testing the API directly via curl using the same API key returns the expected results. The app is installed on a search head running Splunk v7.2.5.1 on Windows Server 2016.

Version information:
Splunk: v7.2.5.1 on Windows Server 2016
VMware Carbon Black EDR On-Prem App: 2.1.4
Carbon Black Response/EDR on-prem server version: 7.4.1

Any help greatly appreciated.
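The traceback bottoms out in cbapi's attribute coercion, where int('') raises because some sensor field comes back empty. As a sketch of that failure mode (not a patch for the app), a tolerant coercion looks like this:

```python
def coerce_int(value, default=None):
    """Defensive version of the int coercion that raises in the traceback:
    int('') throws ValueError, so fall back to a default for empty or
    malformed fields instead of aborting the whole result set."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(coerce_int("42"))  # 42
print(coerce_int(""))    # None
```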
Good evening, I have what appears to be a unique situation. I have tried every means that I could find even vaguely related to my problem.

The scenario: Data, with each record having its own epoch-based timestamp, is being imported into Splunk weekly. As a result, indexed timestamps are nowhere near the actual record timestamps. My dashboard has two text boxes in which the user can input a date range (with formatting guidance) to select the records whose timestamps fall between those dates.

The problem: No matter how I try to format the string inputs, I cannot retrieve the records within those dates. What's worse, when I include my WHERE statement, I don't get ANY records returned. I have been working on this for hours, but I am no closer now than when I began.

The code: My input tokens for the text boxes are "date_start" and "date_stop". The field "eventTime" is the record's timestamp in epoch time.

<query>index=customer sourcetype=json_no_timestamp custApiKey=d8lwmc9qjd778ksmfy
| eval _start=strptime($date_start$, "%Y-%m-%d")
| eval _start=strftime(_start, "%s")
| eval _stop=strptime($date_stop$, "%Y-%m-%d")
| eval _stop=strftime(_stop, "%s")
| where (_start &gt;= eventTime) AND (_stop &lt; eventTime)
</query>

Any help would be GREATLY appreciated!
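For reference, here is what the filter is meant to compute, sketched in Python (the epoch value is hypothetical; this assumes the inputs are "YYYY-MM-DD" and eventTime is epoch seconds). Note that once strptime() has produced a number, converting it back to a string before comparing works against a numeric comparison:

```python
from datetime import datetime, timezone

def in_range(event_epoch, date_start, date_stop, fmt="%Y-%m-%d"):
    """Keep events whose epoch timestamp falls between two date strings.
    Parse each date once into epoch seconds and compare numerically -
    no round trip back through a formatted string."""
    start = datetime.strptime(date_start, fmt).replace(tzinfo=timezone.utc).timestamp()
    stop = datetime.strptime(date_stop, fmt).replace(tzinfo=timezone.utc).timestamp()
    return start <= event_epoch < stop

print(in_range(1615939200, "2021-03-16", "2021-03-18"))  # True (2021-03-17 UTC)
```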
I'm not sure how even to troubleshoot this. A few weeks ago, we started seeing a drop-off in events into Splunk. We are sending Azure SQL Server audit logs via an event hub, picked up by the Azure Add-on for Splunk. Our traffic has NOT changed. Our HF has not changed. I can't see my own activity anymore (a month ago I saw everything I did); now I have no visibility into my traffic. I am seeing traffic from web servers and some other users, but I'm not sure I trust it now. There has been a drop-off in events. What can I do to troubleshoot what is going on here? I can turn on verbose logging, but since I can't throttle or specify what is getting logged (server log, not DB log), it would generate thousands of messages in a very heavily used database.
How to search for broken Splunk forwarders or indexers without using a .conf file?
Hi - When viewing the malware_tracker KV store in Lookup Editor v3.4.6 on Splunk Enterprise version 8.0.6 (build 152fb4b2bb96), the dates in the firstTime and lastTime fields for all entries appear as 1970/01/19. If I use the inputlookup/convert commands, the dates are correct. I read some previous posts regarding this same issue that indicate this is a bug in older versions, but the Lookup Editor version installed is 3.4.6. Thanks, Kris
Hello everyone, I am trying to compare a list of IPs from a lookup with the output of a search field. Instead of doing this:

| search ( dest_ip!=10.0.0.0/8 AND dest_ip!=172.16.0.0/12 AND dest_ip!=192.168.0.0/16 ...)

I want to have a lookup with the IP ranges and exclude from the results any IP that matches the lookup. My lookup looks like:

ips
13.64.0.0/11
13.96.0.0/13
13.104.0.0/14
...

Really, thanks in advance.
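The membership test the lookup needs to express, sketched with Python's ipaddress module (the CIDRs below are the RFC 1918 ranges from the original search, standing in for the lookup rows):

```python
import ipaddress

# Stand-in for the lookup's 'ips' column: one CIDR per row.
cidrs = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
networks = [ipaddress.ip_network(c) for c in cidrs]

def is_excluded(dest_ip):
    """True when dest_ip falls inside any CIDR from the lookup -
    i.e. a result the search should drop."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in networks)

print(is_excluded("10.1.2.3"))  # True  -> filtered out
print(is_excluded("8.8.8.8"))   # False -> kept
```

In SPL the analogous move is a CIDR-matching lookup definition (match_type = CIDR on the ips field) followed by filtering on whether the lookup returned a match.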
Users' accounts are set up properly in Active Directory. They have the appropriate role(s) assigned, but when attempting to log in they keep getting the error "Error > Unauthorized". Other users have no issues, but I am seeing two users that are set up the same as the other users and are getting Unauthorized.