All Topics

Hi Splunkers, I'm trying to set an alert condition to block traffic for IP addresses from 13.108.0.0 to 13.111.255.255 and from 66.231.70.0 to 66.231.85.255, but I'm really stuck. Can anybody help, please? My query is below:

| tstats count values(All_Traffic.app) AS app values(All_Traffic.dvc) AS devicename values(All_Traffic.src_zone) AS src_zone values(All_Traffic.dest_zone) AS dest_zone from datamodel=Network_Traffic where All_Traffic.action=blocked All_Traffic.src_ip IN (*) All_Traffic.dest IN (13.108.0.0 13.111.255.255 OR 66.231.80.0 66.231.95.255) All_Traffic.dest_port IN (*) by _time, All_Traffic.action, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port, All_Traffic.transport, All_Traffic.rule, sourcetype
| rename All_Traffic.* AS *
| sort - _time limit=0
| fields - count
| rename rule AS policy, src_ip AS src
| eval action=case(action="teardown","drop",1=1,action)

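A hedged sketch of one way to express those ranges: the IN (...) clause takes discrete values, not ranges, so one reliable approach is to filter after the tstats with cidrmatch(). The CIDR blocks below cover 13.108.0.0-13.111.255.255 and 66.231.70.0-66.231.85.255 as stated in the question text (note the query above uses 66.231.80.0-66.231.95.255, which would be the single block 66.231.80.0/20); the values() and by fields are trimmed for brevity:

| tstats count values(All_Traffic.app) AS app from datamodel=Network_Traffic
    where All_Traffic.action=blocked
    by _time, All_Traffic.action, All_Traffic.src_ip, All_Traffic.dest, All_Traffic.dest_port
| rename All_Traffic.* AS *
| where cidrmatch("13.108.0.0/14", dest)
    OR cidrmatch("66.231.70.0/23", dest)
    OR cidrmatch("66.231.72.0/21", dest)
    OR cidrmatch("66.231.80.0/22", dest)
    OR cidrmatch("66.231.84.0/23", dest)

Filtering after tstats is less efficient than filtering inside it, but cidrmatch() behaves predictably; this assumes dest holds a plain IPv4 address.
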
Our Splunk rep walked us through setting up SSL for our Splunk servers to communicate with each other and for our Universal Forwarders to connect to our indexer. However, we still get the warning:

X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA)

In addition, Nessus scans find the default Splunk certificate on all of the systems with Universal Forwarders. We have SSL certificates created by our government agency's CA, and I have verified the following: our indexer's server.conf points sslRootCAPath to our CA's pem; our indexer's inputs.conf points serverCert at our server's pem; our universal forwarders' outputs.conf has clientCert pointing at our server's pem, which is located on each system in C:\Program Files\SplunkUniversalForwarder\etc\auth; and our universal forwarders' outputs.conf has sslRootCAPath pointing at our CA's pem in the same directory.

Why do we still get this warning? Are we missing a setting somewhere?

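One possibility, offered as a guess: inputs.conf and outputs.conf only cover the forwarding channel (typically port 9997). The management port (8089) takes its certificate from [sslConfig] in server.conf and keeps SplunkServerDefaultCert until serverCert is overridden there, and that port is usually what Nessus and the health check flag. A minimal sketch, with example paths (the indexer's paths would differ):

# server.conf on each instance -- paths are examples, point them at the agency CA-issued certs
[sslConfig]
serverCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\server.pem
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\ca.pem
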
Hi, I'm getting an error after upgrading our Splunk version. It is a custom ServiceLink app. How do I fix this issue? I suspect the app is not compatible with Python 3.7.

08-15-2022 18:26:33.763 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': The script at path=/data/splunk/etc/apps/TA-servicelink/bin/TA_servicelink_rh_settings.py has thrown an exception=Traceback (most recent call last):
08-15-2022 18:26:33.763 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:26:33.764 +0400 ERROR ScriptRunner [10252 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:34:47.307 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': The script at path=/data/splunk/etc/apps/TA-servicelink/bin/TA_servicelink_rh_settings.py has thrown an exception=Traceback (most recent call last):
08-15-2022 18:34:47.307 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:34:47.309 +0400 ERROR ScriptRunner [21184 TcpChannelThread] - stderr from '/data/splunk/bin/python3.7 /data/splunk/bin/runScript.py setup': File "/data/splunk/etc/apps/TA-servicelink/bin/ta_servicelink/splunktaucclib/rest_handler/endpoint/validator.py", line 388
08-15-2022 18:40:39.629 +0400 INFO sendmodalert [18314 AlertNotifierWorker-0] - Invoking modular alert action=servicelink for search="Threat - NYUAD - Exfiltration of Valuable Data - Rule" sid="scheduler__admin__SplunkEnterpriseSecuritySuite__RMD5ecd0c23d8fa296d1_at_1660574400_145_92DB7169-4DFE-4D47-AE11-FB9899BE27C5" in app="SplunkEnterpriseSecuritySuite" owner="admin" type="saved"
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - File "/data/splunk/etc/apps/TA-servicelink/bin/servicelink.py", line 57
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - results_url=self.settings.get('results_link')
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - ^
08-15-2022 18:40:39.683 +0400 ERROR sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink STDERR - TabError: inconsistent use of tabs and spaces in indentation
08-15-2022 18:40:39.686 +0400 INFO sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink - Alert action script completed in duration=45 ms with exit code=1
08-15-2022 18:40:39.686 +0400 WARN sendmodalert [18314 AlertNotifierWorker-0] - action=servicelink - Alert action script returned error code=1

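The giveaway is the TabError at the end: Python 3 refuses to run files that mix tabs and spaces in indentation, which Python 2 tolerated, so a script like servicelink.py (line 57) that worked before the upgrade now fails. A minimal sketch of one way to normalize the indentation, assuming a 4-space tab width:

# back up the script, then convert tabs to spaces (expand is standard coreutils)
cd /data/splunk/etc/apps/TA-servicelink/bin
cp servicelink.py servicelink.py.bak
expand -t 4 servicelink.py.bak > servicelink.py

The same would need to be done for any other .py files in the app that mix tabs and spaces, such as the validator.py flagged earlier in the log.
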
Hello, I have data being gathered once per minute. FYI, it's disk usage %. Is it possible to create an SPL search that outputs a simple time from _time and UsePct every time UsePct changes? Like dedup, yes, but only when UsePct changes, so if on a given date/hour/minute it goes up or down, I can track the change. I.e., for:

2022-08-15 07:54:29 100%
2022-08-15 07:55:29 100%
2022-08-15 07:56:29 100%
2022-08-15 07:57:29 100%
2022-08-15 07:58:29 99%
2022-08-15 08:00:29 100%
2022-08-15 08:01:29 100%
2022-08-15 08:02:29 100%

For this I would see:

2022-08-15 07:57:29 100%
2022-08-15 07:58:29 99%
2022-08-15 07:59:29 100%

As close as I can get it:

((index=windows OR index=perfmon OR index=os*) tag=oshost tag=performance tag=storage) host=by0saq Filesystem="/dev/mapper/vgappl-_u01_app"
| eval date=strftime(_time,"%x")
| sort _time
| table date UsePct
| dedup date

Thanks.

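A hedged sketch using streamstats to compare each event's UsePct with the previous event's value and keep only the rows where it changes (plus the first row, which has no predecessor):

((index=windows OR index=perfmon OR index=os*) tag=oshost tag=performance tag=storage) host=by0saq Filesystem="/dev/mapper/vgappl-_u01_app"
| sort 0 _time
| streamstats current=f last(UsePct) AS prevUsePct
| where isnull(prevUsePct) OR UsePct != prevUsePct
| eval date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table date UsePct

streamstats current=f last(UsePct) carries the previous event's UsePct forward, so the where clause is a direct "changed since last sample" test; sort 0 _time guarantees the events arrive in time order without the 10,000-row sort cap.
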
Does a CSV import connector or an XML import connector exist in current Splunk versions? :)

Hi, I want to get the rules in Security Essentials as a list. I tried searching in the app, but I can't get a rule list; there is a lot of content in this app: https://docs.splunksecurityessentials.com/content-detail/ . I want to export these rules and correlation searches as XML. Could you help me with this search?

Thanks.

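Not the XML export itself, but a hedged starting point: correlation searches are saved searches, so the REST endpoint can list them. The action.correlationsearch.enabled attribute below is how Enterprise Security marks correlation searches; Security Essentials content that has not been saved as an actual search will not appear this way:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title, eai:acl.app, description, search

From there the results table can be exported from the search UI in the formats Splunk offers (CSV, JSON, XML).
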
Hello, I want to extract 4 fields using regex, with their respective names and values as per below:

Hashes="SHA1=27EFA81247501EBA6603842F476C899B5DAAB8C7,MD5=49E93FA14D4E09AAFD418AB616AD1BB1,SHA256=35E3F44C587DE8BFF62095E768C77E12E2C522FB7EFD038FFFCC0DD2AE960A57,IMPHASH=B7A4477FA36E2E5287EE76AC4AFCB05B"

The actual field name is "Hashes". I want to extract one field named SHA1 with the value "27EFA81247501EBA6603842F476C899B5DAAB8C7", one field named MD5 with the value "49E93FA14D4E09AAFD418AB616AD1BB1", etc. Thank you in advance.

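A hedged sketch with one rex per hash, so the extraction does not depend on the order of the pairs inside Hashes (the character class assumes hex digits in either case):

... | rex field=Hashes "SHA1=(?<SHA1>[A-Fa-f0-9]+)"
    | rex field=Hashes "MD5=(?<MD5>[A-Fa-f0-9]+)"
    | rex field=Hashes "SHA256=(?<SHA256>[A-Fa-f0-9]+)"
    | rex field=Hashes "IMPHASH=(?<IMPHASH>[A-Fa-f0-9]+)"
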
I use CyberArk. I created 3 servers via CyberArk and installed Splunk on the server 192.0.0.1, which is accessed via a CyberArk Remote Desktop connection. Currently, this machine sends event logs. If someone accesses this machine, a login event is received in Splunk, but it doesn't show where the login event came from.

The Windows event log has a src ip, but does that mean where the login event came from? If the client PC logs into the Windows server directly, that is the IP address of the client PC. If the client PC logs into the Windows server through CyberArk, that is the CyberArk IP address. I want the Windows event log to send the IP. In the two cases above, I want to get the IP address of the Windows server.

(Flowchart)

Hello, our Splunk system just got an increase in disk size, as in the image below (we have a master and a 1:1 index cluster structure), meaning an increase for hot from 500GB -> 1T and for cold from 1.5T -> 3T.

I have changed the stanzas in splunk/etc/master-apps/_cluster/local/indexes.conf (where we put our individual index config, such as maxTotalDataSizeMB, homePath.maxDataSizeMB, coldPath.maxDataSizeMB) to match the newly provided disk space. But after I restart services for both our indexers and master, it won't apply the newly assigned disk space and still uses the old one.

I suspect I missed something here. Can anyone point me to where I can configure the overall setting? (I'm not familiar with the Splunk structure.)

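One thing that may be missing, offered as a guess: edits under etc/master-apps are not picked up by a restart alone; the cluster master has to push the configuration bundle to the peers. Roughly:

# on the cluster master
splunk validate cluster-bundle
splunk apply cluster-bundle
splunk show cluster-bundle-status   # confirm the peers received the new bundle

If the index paths are defined on volumes, the maxVolumeDataSizeMB setting on the [volume:...] stanzas would also need raising, since it caps usage regardless of the per-index settings.
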
Hi, we would like to set allow_skew = 15% globally for all of our searches, except for searches that reside in one specific app b. How do I do that?

We tried to set a global value in apps/a/default/savedsearches.conf:

[default]
allow_skew = 15%

and then add a specific configuration in app b to override the global default (apps/b/local/savedsearches.conf):

[default]
allow_skew = 0

But it doesn't work. btool shows that the setting in b/local/savedsearches.conf wins over apps/a/default/savedsearches.conf.

According to Configuration file precedence - Splunk Documentation, savedsearches.conf is a per-app/user configuration file. Adding a default.meta for app b with

[savedsearches]
export = none

also didn't help.

Is there a bug or am I missing something? For reference, the link to the official documentation: Offset scheduled search start times - Splunk Documentation

Thanks! - Lorenz

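A hedged thought: a [default] stanza in one app's savedsearches.conf layers against every other app's files according to file precedence, which is consistent with what btool shows here. One sketch that sidesteps the app-to-app layering entirely is to put the global value at the system layer and keep only the exception inside app b; in app/user context, an app's local file still wins over system/local for that app's own searches:

# $SPLUNK_HOME/etc/system/local/savedsearches.conf
[default]
allow_skew = 15%

# $SPLUNK_HOME/etc/apps/b/local/savedsearches.conf
[default]
allow_skew = 0
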
I have a raw message of the form:

2022-08-15T10:41:54.266337+00:00 microService 9bc7520a-4f8d-4edc-a4cd-b08c0fae8992[[APP/PROC/WEB/2]] APPENDER=APP, DATE=2022-08-15 10:41:54.266, LEVEL=WARN , USER=, THREAD=[pool-25-thread-1], LOGGER=Factory, CORR=, INT_CORR=, X-VCAP-REQUEST-ID=, MESSAGE=warningMessage

What's the rex syntax to return microService AND warningMessage?

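A hedged sketch against that sample, assuming the service name is always the second whitespace-delimited token and MESSAGE is always the last key on the line:

... | rex "^\S+\s+(?<microService>\S+)\s"
    | rex "MESSAGE=(?<warningMessage>.*)$"
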
Hi, recently we changed the format of how our WinEventLog data is fed to Splunk: from the classic format, we changed it to XML. I found out that there is a difference in how data is extracted in classic versus XML. One example is EventCode=4719: before, in classic, we used to have Category and Subcategory fields, but after switching to XML we get CategoryId and SubcategoryId.

The challenge is that we don't have any means to convert these IDs into their meanings. I have been looking for a lookup that comes with the Splunk Add-on for Windows that would map these IDs, but unfortunately I can't find one. I tried to check the Microsoft Windows documents, but to no avail; I cannot find how to get the values for these IDs.

Can anyone help with how we can map CategoryId and SubcategoryId? Thanks in advance!

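In case it helps as a stopgap, a hedged sketch: build small CSV lookups yourself (the lookup names and column names below are hypothetical; the ID-to-name pairs would have to be collected from your own events side by side with Microsoft's audit policy documentation) and apply them at search time:

| lookup win_audit_categories CategoryId OUTPUT Category
| lookup win_audit_subcategories SubcategoryId OUTPUT Subcategory

Both lookups would be defined in the usual way (CSV file plus a lookup definition) with CategoryId/SubcategoryId as the key column and the human-readable name as the output column.
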
Hi, I'm trying to graph events from a created report, and my time field either isn't being recognized, or I see 2 date points and can't use time filters.

| inputlookup Reference_Server_Logins.csv
| append
    [ search index=Data_2022_login_log type=LoginEvent
    | search doc.value.deltaCurrency > 0
    | eval Server=mvindex(split(mvindex(split(source, "-"), 2), "/"), 0)
    | stats count by _time, Server
    | timechart span=1d count by Server]
| dedup _time
| sort - _time
| outputlookup Reference_Server_Logins.csv

This is my report search. The normal search works fine and I can graph that. However, once the data is added to the CSV and I try to add it to a dashboard panel, the _time field isn't affected by the date selection field, the graph shows hours instead of days, and it only shows the 2 earliest values. Messing around creating pivots allows me to see all the data, but again it's not affected by the filter. Any help would be great. Thanks.

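A hedged sketch of an alternative split: keep the lookup as plain stats rows (_time, Server, count) and leave timechart to the dashboard panel, so the day-level span and the time picker behave. The strptime format below is an assumption for the case where the CSV stores _time as text rather than epoch seconds:

| inputlookup Reference_Server_Logins.csv
| eval _time = if(isnum(_time), _time, strptime(_time, "%Y-%m-%d %H:%M:%S"))
| timechart span=1d sum(count) AS count by Server

Running timechart inside the append and then dedup _time on the combined result is likely what drops rows in the current search.
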
I'm removing ex-users from Splunk. I reassigned knowledge objects to new users and deleted the inactive accounts. Now I've found that some datasets are still associated with owners that have already been deleted. How can I change the owner for datasets?

Thanks

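One hedged sketch, assuming the datasets in question are data models (the app name "search", model name "MyModel", and "newowner" below are all placeholders): knowledge-object ownership can generally be changed by POSTing to the object's acl endpoint over the management port:

# placeholders: search (app), MyModel (data model name), newowner (new owner)
curl -k -u admin https://localhost:8089/servicesNS/nobody/search/datamodel/model/MyModel/acl \
    -d owner=newowner -d sharing=app

For table datasets or other dataset types the endpoint path differs, but the acl/owner pattern is the same.
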
I have created a dashboard using Dashboard Studio and am trying to find out how to adjust the width of an input field. By googling, I was able to find instructions for adding CSS to the form in a Classic dashboard, but I have not been able to find out how to do the same in Dashboard Studio. Does anyone know how to achieve this? Also, is there any way to add a chart, image, or similar to the right of the input fields?

I have some data in MySQL, and I have DB Connect in Splunk. Now I want to import the MySQL data into Splunk ES assets, but I have only found how to import data from CSV files.

I know this documentation: Collect and extract asset and identity data in Splunk Enterprise Security - Splunk Documentation, but I don't know how to "Use Splunk DB Connect" to import the data.

Also, this page is empty (v7.0.1): Define identity formats - Splunk Documentation

PS: Sorry for my bad English.

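A hedged sketch of the DB Connect route: run the SQL through dbxquery and write the result to a lookup, then register that lookup as an asset source in ES. The connection name, table, and column list below are placeholders; the columns should be renamed to the ES asset field names (ip, mac, nt_host, dns, owner, priority, etc.):

| dbxquery connection="my_mysql" query="SELECT ip, mac, nt_host, dns, owner, priority FROM assets"
| outputlookup my_mysql_assets.csv

Saved as a scheduled search, this keeps the asset lookup refreshed from MySQL; the lookup is then added under Enterprise Security's asset and identity management as a new asset source.
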
While using the mvexpand command, I am getting the below error:

ERROR - command.mvexpand: output will be truncated at 1000 results due to excessive memory usage. Memory threshold of 500 MB as configured in limits.conf [mvexpand] max_mem_usage_mb has been reached.

Question 1: How can I resolve the above error?
Question 2: Is there an alternative to the mvexpand command?

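Two hedged options. The error names the knob itself, so one option is raising it in limits.conf on the search head (the value below is an example; a restart is needed):

# $SPLUNK_HOME/etc/system/local/limits.conf
[mvexpand]
max_mem_usage_mb = 1500

The other is shrinking what mvexpand has to copy: every field on the event is duplicated for each expanded value, so running | fields with only the needed fields immediately before mvexpand often avoids the limit. Depending on the goal, stats by the multivalue field or an eval/split pattern can sometimes replace mvexpand entirely.
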
We are getting the error below for all indexes, but there is no further detail in any search:

Rawdata journal is missing in the bucket

clush -w splunk-idx1 /data/splunk/bin/splunk generate-hash-files -bucketPath /data/splunk/var/lib/Splunk

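A hedged note: generate-hash-files only (re)creates integrity hashes, so if a bucket's rawdata/journal.gz is genuinely gone there is nothing for it to hash, and the usual options are restoring the bucket from a cluster peer or removing it. Where the rawdata does exist and only the index files are damaged, a per-bucket rebuild looks roughly like this (index and bucket directory are placeholders):

# placeholders: <index>, <bucket_directory>
/data/splunk/bin/splunk rebuild /data/splunk/var/lib/splunk/<index>/db/<bucket_directory>
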
I have installed the Microsoft Office 365 Reporting Add-on for Splunk and configured it with an AD app with the correct permissions. But it keeps failing with a 403. Below is the error that we are getting from /opt/splunk/var/log/splunk/ta_ms_o365_reporting_ms_o365_message_trace_oauth.log:

2022-08-15 14:38:06,042 ERROR pid=17034 tid=MainThread file=base_modinput.py:log_error:316 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/lib/splunktaucclib/modinput_wrapper/base_modinput.py", line 140, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 355, in collect_events
    get_events_continuous(helper, ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 96, in get_events_continuous
    message_response = get_messages(helper, microsoft_trace_url)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 74, in get_messages
    raise e
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace_oauth.py", line 66, in get_messages
    r.raise_for_status()
  File "/opt/splunk/lib/python3.7/site-packages/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2022-08-10T14:38:05.092475Z'%20and%20EndDate%20eq%20datetime'2022-08-10T15:38:05.092475Z'

We have the output of 2 queries in terms of disk usage: one is from the Dell index and one is from the Huawei index.

Dell query:

|`cluster_overview(isisyd)`
| table _time stats.key lnn stats.value
| search (stats.key="ifs.bytes.avail" OR stats.key="ifs.bytes.used" OR stats.key="ifs.ssd.bytes.free" OR stats.key="ifs.ssd.bytes.used")
| eval Usage = case('stats.key'="ifs.bytes.avail","HDD Available",'stats.key'="ifs.bytes.used","HDD Used",'stats.key'="ifs.ssd.bytes.free","SSD Available",'stats.key'="ifs.ssd.bytes.used","SSD Used")
| `bytes_to_gb_tb_pb('stats.value')`
| eval Usage = Usage . " (in GB)"
| stats latest(bytes_gb) AS Space by Usage

Huawei query:

index="huawei_storage"
| fields DeviceModel,Version,WWN,SN,TotalCapacity,UsableCapacity,UsedCapacity,DataProtection,FreeCapacity
| dedup SN
| table DeviceModel,Version,WWN,SN,TotalCapacity,UsableCapacity,UsedCapacity,DataProtection,FreeCapacity

The attached screenshot shows the current individual outputs. The end goal is to have one single table combining both the Huawei and Dell storage capacity information. Any help is appreciated.

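A hedged skeleton: run one query as the base, append the other, and tag each row with a Vendor column so a final table lines the rows up. Unit normalization between the two sources is left out, and the kept fields are trimmed for brevity:

index="huawei_storage"
| dedup SN
| eval Vendor="Huawei"
| table Vendor SN TotalCapacity UsedCapacity FreeCapacity
| append
    [ |`cluster_overview(isisyd)`
      | table _time stats.key lnn stats.value
      | search (stats.key="ifs.bytes.avail" OR stats.key="ifs.bytes.used" OR stats.key="ifs.ssd.bytes.free" OR stats.key="ifs.ssd.bytes.used")
      | eval Usage = case('stats.key'="ifs.bytes.avail","HDD Available",'stats.key'="ifs.bytes.used","HDD Used",'stats.key'="ifs.ssd.bytes.free","SSD Available",'stats.key'="ifs.ssd.bytes.used","SSD Used")
      | `bytes_to_gb_tb_pb('stats.value')`
      | stats latest(bytes_gb) AS Space by Usage
      | eval Vendor="Dell" ]
| table Vendor Usage Space SN TotalCapacity UsedCapacity FreeCapacity

For a single merged capacity view, converting the Huawei capacity fields to GB first and then doing a stats sum by Vendor would collapse both sources into comparable rows.
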