All Topics


This is the field value from which I need to create two separate fields:

<span>This call to java.lang.Runtime.exec() contains a command injection flaw. The argument to the function is constructed using untrusted input. If an attacker is allowed to specify all or part of the command, it may be possible to execute commands on the server with the privileges of the executing process. The level of exposure depends on the effectiveness of input validation routines, if any. The first argument to exec() contains tainted data from the variables (new String[...]). The tainted data originated from an earlier call to AnnotationVirtualController.vc_annotation_entry.</span> <span>Validate all untrusted input to ensure that it conforms to the expected format, using centralized data validation routines when possible. When using blocklists, be sure that the sanitizing routine performs a sufficient number of iterations to remove all instances of disallowed characters. Most APIs that execute system commands also have a "safe" version of the method that takes an array of strings as input rather than a single string, which protects against some forms of command injection.</span> <span>References: <a href="https://cwe.mitre.org/data/definitions/78.html">CWE</a> <a href="https://owasp.org/www-community/attacks/Command_Injection">OWASP</a></span>

From this value I need to create two separate fields. The first field, "flaw", should contain the text of the first <span> (beginning "This call to java.lang.Runtime.exec() contains a command injection flaw..."). The second field, "remediation", should contain the text of the second <span> (beginning "Validate all untrusted input to ensure that it conforms to the expected format...").
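
A hedged sketch of one way to do this extraction in SPL, assuming the raw HTML lands in a field called description (that field name is a placeholder; swap in whatever your extraction produces):

| rex field=description max_match=3 "(?s)<span>(?<span_text>.*?)</span>"
| eval flaw=mvindex(span_text, 0), remediation=mvindex(span_text, 1)
| fields - span_text

mvindex() is zero-based, so the first span becomes flaw and the second becomes remediation; the third span (the References block) is simply ignored.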
Hello, we've set up our Splunk Search Head to download snapshots from the ThreatStream API directly. While troubleshooting, we observed that it was downloading the snapshots from hxxps://ts-optic.s3.amazonaws.com/snapshots/... but then had issues processing them.

2022-11-03 02:01:47,394 18860 ERROR threatstream_app - threatstream_kvstore> Autologin succeeded, but there was an auth error on next request. Something is very wrong.
2022-11-03 02:01:47,443 18860 ERROR threatstream_app - threatstream_kvstore> Failed at add_kvs_batch - sz == 1, collection_name: ts_md5, data: [{'date_last': '2016-02-21T14:52:32.000Z', 'id': '0', '_key': '99929352'}]
2022-11-03 02:01:47,443 18860 ERROR threatstream_app - threatstream_kvstore> Autologin succeeded, but there was an auth error on next request. Something is very wrong.
2022-11-03 02:01:47,464 18860 ERROR threatstream_app - threatstream_kvstore> Failed at add_kvs_batch - sz == 1, collection_name: ts_md5, data: [{'date_last': '2016-02-21T14:52:37.000Z', 'id': '0', '_key': '99929603'}]
2022-11-03 02:01:47,464 18860 ERROR threatstream_app - threatstream_kvstore> Autologin succeeded, but there was an auth error on next request. Something is very wrong.
2022-11-03 02:01:48,677 18860 INFO threatstream_app - ioc_loader> 193571 items with id="0" saved to kvs: ts_md5 for deletion, time: 35505.908512592316
2022-11-03 02:01:48,678 18860 INFO threatstream_app - ioc_loader> 193571 items with id="0" saved to kvs: ts_md5 for deletion, time: 35505.908512592316
2022-11-03 02:01:49,059 18860 ERROR threatstream_app - ts_ioc_ingest> failed to download optic intelligence: Autologin succeeded, but there was an auth error on next request. Something is very wrong.
2022-11-03 02:01:49,059 18860 ERROR threatstream_app - ts_ioc_ingest> failed to download optic intelligence: Autologin succeeded, but there was an auth error on next request. Something is very wrong.
2022-11-03 02:01:49,933 18860 ERROR threatstream_app - ts_ioc_ingest>
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 622, in delete
    response = self.http.delete(path, self._auth_headers, **query)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 1169, in delete
    return self.request(url, message)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 1255, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- call not properly authenticated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 232, in _handle_auth_error
    yield
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 301, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 622, in delete
    response = self.http.delete(path, self._auth_headers, **query)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 1169, in delete
    return self.request(url, message)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 1255, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- call not properly authenticated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/threatstream/bin/ts_ioc_ingest.py", line 284, in download_iocs
    TmDataManager(splunka=remote_splunk, logger=logger).process_data()
  File "/opt/splunk/etc/apps/threatstream/bin/ts/tm_data_manager.py", line 176, in process_data
    self._process_data()
  File "/opt/splunk/etc/apps/threatstream/bin/ts/tm_data_manager.py", line 245, in _process_data
    self.load_from_lookup_files()
  File "/opt/splunk/etc/apps/threatstream/bin/ts/tm_data_manager.py", line 508, in load_from_lookup_files
    iocs.load_iocs()
  File "/opt/splunk/etc/apps/threatstream/bin/ts/lookup_iocs.py", line 404, in load_iocs
    util.utils.remove_0_id_values(self.kvsm, kvs)
  File "/opt/splunk/etc/apps/threatstream/bin/util/utils.py", line 143, in remove_0_id_values
    remove_delete_id_values(kvsm, ioc_kvs_name, 'id', '0')
  File "/opt/splunk/etc/apps/threatstream/bin/util/utils.py", line 146, in remove_delete_id_values
    kvsm.delete_kvs(kvs, {id_name : delete_id_value})
  File "/opt/splunk/etc/apps/threatstream/bin/util/kvs_manager.py", line 286, in delete_kvs
    collection.data.delete(query=json.dumps(query_dict))
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/client.py", line 3678, in delete
    return self._delete('', **({'query': query}) if query else {})
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/client.py", line 3631, in _delete
    return self.service.delete(self.path + url, owner=self.owner, app=self.app, sharing=self.sharing, **kwargs)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 301, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/opt/splunk/etc/apps/threatstream/bin/splunklib/binding.py", line 235, in _handle_auth_error
    raise AuthenticationError(msg, he)
splunklib.binding.AuthenticationError: Autologin succeeded, but there was an auth error on next request. Something is very wrong.

So I guess "something is very wrong" - but what? Does anyone know a solution, or at least the cause of this?
Hello! I just have a dashboard for practicing different searches, XML code, etc. (macOS), and I was trying to include a random static jpg picture in the source code at the beginning of my dashboard. This is what I have in the source code, based on similar questions here on Splunk Answers:

<html>
  /* this is where all my styling is for font, size, colors, alignment, etc. for a title, and I wanted to include a jpg right after */
  <img src="static/app/search/images/picture.jpg/">
</html>

I'm not sure what exact file path to place my jpg in, or what code I would need to get this to work. There are two different answers I found here on Splunk Answers:

splunk/etc/apps/search/appserver/static/images/picture.jpg
splunk/apps/search/static/images/picture.jpg

I tried both ways to no avail. (I created the 'images' folder.)
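
A minimal sketch of the combination that usually works, assuming a default $SPLUNK_HOME install and the search app (picture.jpg and the images subfolder are carried over from the question): place the file at $SPLUNK_HOME/etc/apps/search/appserver/static/images/picture.jpg, then reference it with a leading slash and no trailing slash:

<html>
  <img src="/static/app/search/images/picture.jpg"/>
</html>

Splunk Web caches static assets, so a Splunk restart (or visiting the /_bump endpoint in the browser to bump the asset cache version) may be needed before the image shows up.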
Hey Splunk community! I need to create a search query to find instances where the time between a "Cache set" log from my application and a "Cache miss" log is not equal to a certain value (the configured TTL), for any cache key. I've attempted starting with a particular key (sampleKey), but the end goal is to tabularize these results for all keys. Here's my attempt to calculate the time difference for sampleKey between the set and miss times:

index=authzds-e2e* "setting value into memcached" "key=sampleKey"
    [ search index=authzds-e2e* "Cache status=miss" "key=sampleKey"
    | stats latest(_time) as missTime ]
| stats earliest(_time) as setTime
| eval timeDiff=setTime-missTime

My goal is to calculate the difference between consecutive set and miss events, key by key (not earliest/latest as in the above query).
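
A rough sketch of one way to pair each miss with the set that preceded it, per key; this assumes the index pattern from the question and that key is already an extracted field (if not, pull it out with rex first):

index=authzds-e2e* ("setting value into memcached" OR "Cache status=miss")
| eval action=if(searchmatch("setting value into memcached"), "set", "miss")
| sort 0 key _time
| streamstats current=f last(_time) as setTime last(action) as prevAction by key
| where action="miss" AND prevAction="set"
| eval timeDiff=_time - setTime
| table key setTime _time timeDiff

From there, a final | where timeDiff!=<TTL seconds> (or a tolerance band around the TTL) should surface the anomalous keys.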
Hi Team, I am not able to upload a local log file to my local Splunk instance; I am getting the below error in splunkd.log:

OneShotWriter failed to insert data to parsingQueue. Timed out in 5000 milliseconds.

Can someone please help me with this? Thanks
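
This error usually means the parsing queue is full, so the one-shot upload times out waiting for space. A hedged diagnostic search to check whether the pipeline queues are backed up (the queue names in metrics.log are standard):

index=_internal source=*metrics.log* group=queue name=parsingqueue
| timechart span=1m max(current_size_kb) as current_kb max(max_size_kb) as capacity_kb

If current_kb sits at capacity_kb, the blockage is downstream of parsing (typing/indexing queues or a stalled output), not the upload itself.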
Hi Splunk Experts, I tried to create a search but couldn't get it working. I need a search that throws an alert if an interface on a Cisco switch goes down and doesn't come back up within 5 minutes. The logs are as in the screenshot below. I managed to write the query for only one scenario, i.e. I can get an alert when the switch status changes state to down, but I can't correlate the two message fields back to back.

Required alert scenario: when a log appears whose message field contains "changed state to down", and no log whose message field contains "changed state to up" appears within the next 5 minutes, I must get an alert.

Thanks in advance.
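
A sketch under assumptions (the index, sourcetype, and interface extraction are placeholders; the "changed state to" phrases come from the question). The idea is to keep only the latest state per interface and alert when it has been down for more than 5 minutes:

index=network sourcetype=cisco:syslog ("changed state to down" OR "changed state to up")
| eval status=if(searchmatch("changed state to up"), "up", "down")
| rex "Interface (?<interface>\S+),"
| stats latest(_time) as lastChange latest(status) as lastStatus by host interface
| where lastStatus="down" AND lastChange < relative_time(now(), "-5m")

Scheduled every 5 minutes over, say, the last 24 hours, this fires once per interface that went down and never came back up.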
Hi! I am using a line chart on my dashboard, and I'm trying to make the x-axis labels constant: for example, to show labels for the last year, at monthly intervals. In addition, I don't want every dot on the line to be represented with a label. As you can see in this picture (from the documentation), if you follow the blue line (please ignore the yellow line) along the x-axis, we do not get a label for each "dot" (value).
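
One hedged way to get this from the data side is to let timechart fix the bucket size, so the x-axis always carries the same twelve monthly labels regardless of how dense the underlying points are (the base search is a placeholder):

index=my_index earliest=-1y@mon latest=@mon
| timechart span=1mon count as events

With fixed monthly buckets the chart draws one point per month, and the axis labels follow the buckets rather than the raw events.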
Hi Guys, We are migrating our Splunk authentication from LDAP to Okta SAML. We have about 40-odd SAML groups set up in Splunk, and each SAML group has a different role. Now, there are many users who are in multiple SAML groups. The question is: how will Splunk decide which role such a user gets? I know that LDAP authentication gives precedence to the connection order of the LDAP strategy, meaning that if a user is in strategy #6 and #7, they will be assigned the role attached to LDAP strategy #6. I don't see this option in SAML. Any help would be appreciated. Thanks, Neerav
Hi there, I'd appreciate it if anyone could help me with this query. I am trying to pump a local file to Splunk using Fluent Bit. Splunk is currently HTTPS and secured. I keep encountering an "unexpected EOF" error message, and I am not sure what I have done wrong in the fluent-bit.config file. This is the screenshot of Splunk's general settings. Below is the fluent-bit.config that I used with fluent-bit.exe:

[INPUT]
    Name tail
    Tag  taglog
    Path C:\*.json

[OUTPUT]
    Name            splunk
    Match           *
    Host            localhost
    Port            443
    Splunk_Token    <The HTTP Event Collector token generated in Splunk Web>
    TLS             On
    TLS.Verify      On
    http_user       <The username login to Splunk Web>
    http_passwd     <The password used to login to Splunk Web>
    splunk_send_raw On

When I set "TLS.Verify" to Off, I get a 303 HTTP status code.
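
A hedged guess at the cause: port 443 here is Splunk Web's HTTPS port, while the HTTP Event Collector listens on its own port (8088 by default, per the HEC Global Settings), and the Fluent Bit splunk output authenticates with the token alone, so the Splunk Web username/password entries shouldn't be needed. A minimal sketch of the OUTPUT section under those assumptions (token placeholder kept from the question; TLS.Verify Off only if HEC is using the default self-signed certificate):

[OUTPUT]
    Name         splunk
    Match        *
    Host         localhost
    Port         8088
    Splunk_Token <The HTTP Event Collector token generated in Splunk Web>
    TLS          On
    TLS.Verify   Off

splunk_send_raw changes how Fluent Bit packages the record for HEC, so leaving it Off is the simpler starting point for JSON input.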
Hi Team, We have configured custom email for database alerts. In the alert email I need to put the hostname of the database and its type, i.e. whether it's Windows or Linux. Can anyone help me with the variables that I need to include in the email template in order to fulfill the above requirement? Thanks, Eswari
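
A sketch of the usual pattern, with hypothetical index and field names: make the alert's search return the values as result fields, then reference them in the email body with $result.<fieldname>$ tokens.

index=db_alerts
| eval os_type=case(match(host, "(?i)win"), "Windows", true(), "Linux")
| table host os_type message

In the email message body, $result.host$ and $result.os_type$ then expand to the values from the first result row.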
Hi there, can anyone please advise how to onboard VM logs and Bastion logs from Azure to Splunk? I have installed the Splunk Add-on for Microsoft Cloud Services, but I am only receiving metrics logs from the Bastion event hub and the VM event hub. Please let me know how to get VM logs and Bastion logs from Azure to Splunk. Thanks in advance.
Hi, how do I display which values are missing from my lookup table compared to the actual data? I have one field with a list of users from my csv file. I took a specific range of users and am trying to find out which of those users are missing from my csv file. I cannot find a solution; can someone help me out? My query:

| inputlookup filename.csv
| search Username IN (user A, user B, User C, etc.)
| dedup Username
| fields Username
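
A hedged sketch of the comparison run in the other direction - users seen in the data that are absent from the CSV (the index name and the IN list are placeholders):

index=my_index Username IN ("user A", "user B", "User C")
| stats count by Username
| search NOT [ | inputlookup filename.csv | fields Username ]

The subsearch returns the CSV's usernames as a filter, and NOT keeps only the usernames that never appear in the file.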
Hello community, I'm having a very specific problem and I can't find a solution after several days of attempts, all of which failed. Let me explain the situation: we have a Splunk OnCall instance that serves as our hypervisor and reports the incidents from several of our monitoring tools. Our users acknowledge alerts directly in Splunk OnCall for incident support. We then pull all this data into Splunk Enterprise (via an official plugin). For several weeks, I have been trying to compute the delta between the number of alerts of a given type (based on its title) and the number of times that alert has been acknowledged. To that end, OnCall sends the same information to Enterprise several times, but with different details:
- when an alert appears in OnCall, Enterprise gets the info with the status "UNACKED";
- when an alert is acknowledged, it comes through with the status "ACKED";
- when an alert is over, it comes through with the status "RESOLVED".
So I can have up to 3 copies of the same information in Enterprise.

Now that the (long) scene is set, here is my problem: I manage to output the RESOLVED and ACKED alerts in the same table, in order to compute a delta between the number of RESOLVED and the number of ACKED, but I cannot "align" the information. I use this search:

index=oncall_prod routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename entityDisplayName as Service
| stats count by Service
| appendcols
    [ search index=oncall_prod routingKey=*
    | search currentPhase=ACKED
    | dedup incidentNumber
    | rename entityDisplayName as Service_ACKED
    | stats count by Service_ACKED
    | rename count AS ACKED ]
| eval matchfield=coalesce(Service,Service_ACKED)
| table Service count Service_ACKED ACKED

and the result shows my problem: for some alerts there has never been an acknowledgment, so the rows shift. And when I compute the delta with a simple calculation, it works row by row, so the values don't mean anything because it isn't comparing the right things. I tried several methods found here and there on the forum to properly align my table, including the following search:

index=oncall_prod routingKey=*
| search currentPhase=RESOLVED
| dedup incidentNumber
| rename entityDisplayName as Service
| stats count by Service
| eval matchfield=Service
| join matchfield
    [ search index=oncall_prod routingKey=*
    | search currentPhase=ACKED
    | dedup incidentNumber
    | rename entityDisplayName as Service_ACKED
    | stats count by Service_ACKED
    | rename count AS ACKED
    | eval matchfield=Service_ACKED ]
| table Service count Service_ACKED ACKED

but that doesn't work either, because the result shows me ONLY the rows with both a RESOLVED and an ACKED status, leaving out the alerts that only had the RESOLVED status. How do I line the acknowledgments up with the correct RESOLVED rows? And how do I keep the rows without acknowledgments, with a value of 0? If you have an idea, I'm interested. Best regards, Rajaion
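
A hedged restructuring that sidesteps appendcols and join entirely: count both phases in one pass and pivot by phase, so every Service row carries its own RESOLVED and ACKED counts and missing acknowledgments become 0 (the index and field names are taken from the question):

index=oncall_prod routingKey=* currentPhase IN (RESOLVED, ACKED)
| dedup incidentNumber currentPhase
| rename entityDisplayName as Service
| chart count over Service by currentPhase
| fillnull value=0 ACKED RESOLVED
| eval delta=RESOLVED - ACKED

Because chart builds one row per Service with one column per currentPhase, the alignment problem disappears by construction.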
I have a field called Identifier which has server names as values. I need to check whether the first character of each server name is a number or not. Could you please help me? I used regex but was not able to achieve it. Thanks. E.g., server names:

01234server1
01256server2
2345server3
Abcserver
bcdserver
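
A minimal sketch using match() in eval (Identifier is the field named in the question):

... | eval first_char_is_digit=if(match(Identifier, "^\d"), "yes", "no")

Appending | search first_char_is_digit="yes" then keeps only the servers whose names start with a number.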
Hi Splunkers, a customer has asked us for a Splunk architecture proposal for their environment. I have never designed Splunk architectures, so I searched the web and found some valid documents, like the "Splunk Validated Architectures"; the point is that the overall architecture (Splunk + environment to monitor) is quite particular. The desired customer architecture is the following:

Data sources -> Mulesoft -> Splunk Cloud SaaS -> Mulesoft

Additional info:
1. No agent may be installed in the Mulesoft environment.
2. No ES required.

So, the data flows are the following:
1. All data sources send their logs to the Mulesoft environment.
2. Mulesoft sends the data to Splunk; so, from a Splunk perspective, Mulesoft is the only ("big") data source.
3. Splunk does the correlation and, if an alert triggers, sends data back to Mulesoft.

So, my open points here are two:
1. Since Mulesoft is the only data source, albeit a big one, and has its own HA management (so it is not the Splunk environment's job to manage that), I think I have no reason to use a forwarder as an "intermediate host" and I can send logs directly to Splunk with the token mechanism and Log4j configs in Mulesoft; are there reasons I haven't considered that would justify a forwarder between the Mulesoft environment and the Splunk one?
2. If an alert triggers, I have to forward it back to the Mulesoft system. I know I can run response actions when an alert triggers: send an email, execute a script, and so on. What would be the best action for sending data back to Mulesoft?
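
For point 1, the token mechanism in question is the HTTP Event Collector; a hedged sketch of what a direct Mulesoft-to-Splunk-Cloud event post looks like (the hostname and token are placeholders; Splunk Cloud HEC endpoints typically use an http-inputs- prefix):

curl "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": {"message": "hello from mulesoft"}, "sourcetype": "mulesoft:log"}'

For point 2, the webhook alert action is the usual fit: it POSTs the alert payload as JSON to an HTTP endpoint that Mulesoft can expose.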
My data currently looks like this:

Date        Name
2022-11-01  ABC
2022-11-01  DEF
2022-11-01  GHI
2022-11-02  JKL
2022-11-02  MNO
2022-11-03  PQR
2022-11-03  STU
2022-11-03  VWX
2022-11-03  YZ1

I would like it to look like this:

Date        Name
2022-11-01  ABC
2022-11-01  DEF
2022-11-01  GHI

2022-11-02  JKL
2022-11-02  MNO

2022-11-03  PQR
2022-11-03  STU
2022-11-03  VWX
2022-11-03  YZ1

I need an empty row to be inserted whenever the Date differs from the value before it.
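
A hedged trick using mvexpand to synthesize an empty spacer row each time Date changes (this assumes the results are already sorted by Date, as in the example):

| sort 0 Date
| streamstats current=f last(Date) as prevDate
| eval rows=if(isnotnull(prevDate) AND Date!=prevDate, mvappend("spacer", "data"), "data")
| mvexpand rows
| eval Date=if(rows="spacer", "", Date), Name=if(rows="spacer", "", Name)
| fields - rows prevDate

mvexpand duplicates the first row of each new date, and the final eval blanks out the duplicate so it renders as the empty separator row.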
Hello folks, I'm building an add-on. I am using a multipleselect field on the input whose value should depend on another dropdown field. What I want exactly is to hide the multipleselect field; it should only become visible depending on the previous dropdown's value. Is there any way to do this in the globalConfig.json file? Thank you.
Hi team, I have "file_size" in my extracted fields, with values like 1.56 KB, 5.03 MB, and 1.06 B, and those values are strings. I need a query to convert the strings to numbers so I can use sum(file_size); I also need to use the max and min commands on file_size. Please help me with this. Thanks
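
A sketch that normalizes everything to bytes before aggregating (the unit list covers the examples in the question; extend the case() for TB and beyond as needed):

| rex field=file_size "^(?<size_num>[\d.]+)\s*(?<size_unit>[KMGT]?B)$"
| eval size_bytes=case(
    size_unit="B",  tonumber(size_num),
    size_unit="KB", size_num*1024,
    size_unit="MB", size_num*pow(1024,2),
    size_unit="GB", size_num*pow(1024,3))
| stats sum(size_bytes) as total_bytes, max(size_bytes) as max_bytes, min(size_bytes) as min_bytes

Once the values are numeric bytes, sum, max, and min all behave as expected; convert back to a display unit at the end if needed.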
Hi. We are going to have a data source with some sensitive data, where there is a requirement that only the owner of a specific event is allowed to see it. The events will have the user as part of the data; that field can be created as an indexed field. I will, of course, keep the data in a separate index, and thought I might be able to use a search restriction to limit access, so that a user can only search data where the user field matches the logged-on user. I can see it is possible to use the token $env.user$ in a dashboard, but I would really like to use it in the restrictions part of the role, so that it automatically applies the logged-on user in the restriction. Any help will be much appreciated. Kind regards, las