All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@lknecht_splunk Hi Splunk,   My add-on uses setup.xml to create a view for taking a URL and API key, a .js file to validate those inputs, and a Python script to make the request that stores the credentials. Now I need to convert this to a setup view, using your Developer Guidance setup view as a template, where I see that you are using JavaScript to do all the work. How do I convert setup.xml into a view without rewriting the entire app?
{"line":{"log_type":"testlog","log_version":"1.0.0","service":"test","version":"1.0.0","timestamp":"2021-10-01T22:24:01.038Z","custom_data":{"info_to_log":"disneyworld {\"message\":\" Incorrect team id, cannot create relationship. \",\"status_code\":500}  \"{\\\"id\\\":\\\"fa351cd91130\\\",\\\"type\\\":\\\"test_add\\\",\\\"correlation_id\\\":\\\"e79ed142\\\",\\\"event_data\\\":{\\\"bar_id\\\":\\\"12312312\\\",\\\"mickeymouse_id\\\":\\\"12123212\\\",\\\"minniemouse_id\\\":\\\"121231\\\",\\\"disney_id\\\":\\\"1212312\\\",\\\"role\\\":\\\"cartoon\\\"}}\""},"level":"info","message":"disneyerror"},"source":"stdout","tag":"asawescw123"}   can anyone help with the query to create table where I can see the columns message (inside info_to_log), bar_id, mickeymouse_id, minniemouse_id  
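A hedged sketch of one way to build that table — the index and sourcetype below are placeholders, and because info_to_log holds a doubly-escaped JSON string, spath alone can't reach the inner fields, so rex is used for those (the patterns may need tuning against your exact escaping):

```
index=your_index sourcetype=your_sourcetype
| spath path=line.custom_data.info_to_log output=info_to_log
| rex field=info_to_log "\"message\":\"\s*(?<message>[^\"]+?)\s*\""
| rex field=info_to_log "bar_id\W+(?<bar_id>\d+)"
| rex field=info_to_log "mickeymouse_id\W+(?<mickeymouse_id>\d+)"
| rex field=info_to_log "minniemouse_id\W+(?<minniemouse_id>\d+)"
| table _time message bar_id mickeymouse_id minniemouse_id
```

The \W+ in each pattern is there to step over the escaped quote/colon sequences between the key name and the value.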
Hello. I'm having a bit of an issue that I can't figure out. I have a query that references an inputlookup and produces a line chart, but it only shows the dates for which there are values. How do I modify this query so the line chart shows all months of the year, forcing a value of 100 even when there is no value for a particular field? Please help. Thank you.

| inputlookup something.csv
| search business_service IN('"SomeService"') the_month="2020-*"
| stats sum(imp_duration) as YTD_Impact_Hours values(inc_no) as inc_no by business_service the_month
| eval YTD_Hours=6600
| eval Score=round(((1-(YTD_Impact_Hours / YTD_Hours))*100),3)
| eval expected_score=100
| stats values(expected_score) as "Expected Score" values(Score) as Score sum(YTD_Impact_Hours) as YTD_Impact_Hours by the_month inc_no
| eval the_month=the_month."-01"
| eval the_month=strptime(the_month, "%Y-%m-%d")
| sort 14 -the_month
| sort the_month
| convert timeformat="%b-%y" ctime(the_month)
| rename business_service as "Business Service" YTD_Impact_Hours as "Impact Hours" the_month as Date inc_no as Incident Score as "Actual Score"
| fields - "Impact Hours"
| fillnull "Actual Score" value=100
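One common pattern is to append a stub row for every month and let a stats merge them in, so months with no data fall through to the default score. A simplified sketch of the idea, reusing the 2020 data and the 6600-hour budget from the search above (you'd fold this back into the full query):

```
| inputlookup something.csv
| search the_month="2020-*"
| stats sum(imp_duration) as YTD_Impact_Hours by the_month
| append
    [| makeresults count=12
     | streamstats count as monthnum
     | eval the_month="2020-".printf("%02d", monthnum)
     | fields the_month]
| stats sum(YTD_Impact_Hours) as YTD_Impact_Hours by the_month
| fillnull YTD_Impact_Hours value=0
| eval Score=round((1-(YTD_Impact_Hours/6600))*100, 3)
```

Months with no events end up with YTD_Impact_Hours=0, which makes Score exactly 100, and every month now appears on the x-axis.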
Hello, I've been trying to figure this out for the past 2 days and I cannot find which config file is making my host send logs to index=main. I have 4 other machines in forwarder management that have the same application sending logs to the correct index, except for this system. It sends Application logs to index=main, but all other security logs go to index=windows. I double-checked the apps folder on that machine and compared it with a machine that is not sending to index=main, and also did the same in etc/system/local. This is a single deployment: we have a single search head and a single indexer.
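One way to track down which configuration actually wins is btool, run on the forwarder itself — it prints every effective setting together with the file it came from. A sketch, assuming a default Windows universal forwarder install path (adjust to yours):

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk btool inputs list --debug > inputs_effective.txt
```

Then look for the Application event log stanza in inputs_effective.txt and check whether it has an `index =` line at all — a monitor stanza with no index setting falls back to the default index, which is main.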
Hello, can someone assist with how to do a search like the one below for multiple SamAccountNames, ideally from a txt file or CSV, so the file can be referenced in the search?

<SamAccountName> EventCode=4624
| table _time,EventCode,src,user,Logon_Type
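A hedged sketch: upload the CSV as a lookup (here assumed to be called accounts.csv with a column named SamAccountName) and feed it in as a subsearch, renaming the column to whatever field your events actually use (user in this example):

```
EventCode=4624
    [| inputlookup accounts.csv
     | rename SamAccountName as user
     | fields user]
| table _time,EventCode,src,user,Logon_Type
```

The subsearch expands into an implicit (user="a" OR user="b" OR ...) filter, so the search only returns logon events for accounts in the file.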
We're receiving the error below repeatedly when trying to integrate Splunk with Crowdstrike using the provided Splunkbase add-ons. The API credentials are correct and the Crowdstrike tenant is populated with some detections and incidents. Please advise.   2020-10-01 18:51:36,664 DEBUG pid=7904 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): api.us-2.crowdstrike.com:443 2020-10-01 18:51:38,016 DEBUG pid=7904 tid=MainThread file=connectionpool.py:_make_request:437 | https://api.us-2.crowdstrike.com:443 "POST /oauth2/token HTTP/1.1" 201 1200 2020-10-01 18:51:38,017 INFO pid=7904 tid=MainThread file=base_modinput.py:log_info:295 | Successfully retrieved OAuth2 API token 2020-10-01 18:51:38,344 DEBUG pid=7904 tid=MainThread file=connectionpool.py:_make_request:437 | https://api.us-2.crowdstrike.com:443 "GET /sensors/entities/datafeed/v2?appId=DSR9a&format=json HTTP/1.1" 404 196 2020-10-01 18:51:38,345 ERROR pid=7904 tid=MainThread file=base_modinput.py:log_error:309 | Unable to access data streams. Error Code: 404 - Error Message: resource not found 2020-10-01 18:51:38,348 ERROR pid=7904 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events. 
Traceback (most recent call last): File "E:\Splunk\etc\apps\TA-crowdstrike-falcon-event-streams\bin\ta_crowdstrike_falcon_event_streams\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events self.collect_events(ew) File "E:\Splunk\etc\apps\TA-crowdstrike-falcon-event-streams\bin\crowdstrike_event_streams.py", line 71, in collect_events input_module.collect_events(self, ew) File "E:\Splunk\etc\apps\TA-crowdstrike-falcon-event-streams\bin\input_module_crowdstrike_event_streams.py", line 346, in collect_events crowdstrike_client() File "E:\Splunk\etc\apps\TA-crowdstrike-falcon-event-streams\bin\input_module_crowdstrike_event_streams.py", line 234, in crowdstrike_client num_feeds = len(response['resources']) UnboundLocalError: local variable 'response' referenced before assignment    
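For what it's worth, the final UnboundLocalError is a secondary symptom: the 404 on /sensors/entities/datafeed/v2 means the request failed, so the add-on's response variable is never assigned before it is used. A minimal, purely illustrative Python reproduction of that bug class (not the add-on's actual code):

```python
def collect_events(status_code):
    """Mimics the add-on's flow: `response` is assigned only on success."""
    if status_code == 200:
        response = {"resources": ["feed1"]}
    # On a 404 the assignment above never happens ...
    try:
        return len(response["resources"])
    except UnboundLocalError:
        # ... so the later reference raises UnboundLocalError, as in the log.
        return None
```

The real fix is on the API side rather than in the Python — a 404 on that endpoint usually indicates the Event Streams API is not enabled for the tenant, or the region/appId in the input configuration is wrong.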
I have a query that returns the following result.   Status Count 200 800 404 34 400 20 500 12   And I would like to transform it to something like this Count(200) Count(404) Count(400)  Count(500) 800 34 20 12   Is this possible? Thanks.
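One hedged sketch using transpose — prefix the status into the field name first, then flip rows into columns (transpose leaves behind a column field naming the original row, which can be dropped):

```
<your existing search>
| eval Status="Count(".Status.")"
| transpose 0 header_field=Status
| fields - column
```

This yields a single row with columns Count(200), Count(404), Count(400), Count(500) holding 800, 34, 20 and 12.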
Just for the sake of knowledge, how much _internal data is generated if my daily indexing volume is 6 TB? Will it be 15% of 6 TB? I know it doesn't consume my license...
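Rather than assuming a percentage, you can measure it on your own deployment — _internal volume depends mostly on the number of Splunk instances and their logging level, not linearly on licensed volume. A sketch:

```
index=_internal earliest=-24h
| eval bytes=len(_raw)
| stats sum(bytes) as raw_bytes
| eval raw_gb=round(raw_bytes/1024/1024/1024, 2)
```

This sums the raw size of one day of _internal events; run it over a few different days to get a realistic range.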
Does the use of HECs require traversing the public internet to get data into Splunk? For example, if my customer were the government, the data passed through Firehose into Splunk must not touch the internet.
I am trying to read a file that gets replaced once every 24 hours; it has the exact same name and almost identical data.   The file name is:  e:\abcd-app-files\data-translation.txt   Since the data in the file is almost the same every day, I want to use crcSalt, but crcSalt=<SOURCE> will not work because my file has the same name every day.  Is there a way to add a dynamic string (maybe a timestamp) to the crcSalt, so that Splunk will read the file every time, even with similar data and the same name?    NOTE: I can't change the name of the file to include a timestamp. Also, the file content doesn't contain a timestamp inside the data.
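As far as I know there is no supported way to make crcSalt dynamic. The usual workaround when a same-named file is replaced with similar content is to enlarge the slice of the file the CRC is computed over, so differences further into the file are noticed. A sketch for inputs.conf (index and sourcetype are placeholders; raise initCrcLength further if the first 1 KB is also identical day to day):

```
[monitor://e:\abcd-app-files\data-translation.txt]
index = your_index
sourcetype = your_sourcetype
initCrcLength = 1024
```

By default the CRC only covers the first 256 bytes of the file, which is why a replaced file with an identical header looks "already read" to Splunk.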
Hello everyone, I have a set of correlation searches (about 250) to deploy to different Splunk ES instances. Instead of writing them one by one in every Splunk, I would like to create an application with all those correlation searches and later deploy it. Is it sufficient to populate the savedsearches.conf file with one stanza per correlation search? Thanks in advance, Luca
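Broadly yes, with a caveat: ES correlation searches are ordinary saved searches plus ES-specific action.* keys, so a plain stanza will schedule and run, but it only appears as a correlation search (and creates notables) if those keys are present. A hedged sketch of one stanza — the keys and values here are illustrative, and the safest template is to build one search in the ES UI and copy the stanza it generates:

```
[Example - Correlation Search - Rule]
search = index=your_index your_search_terms
cron_schedule = */15 * * * *
enableSched = 1
dispatch.earliest_time = -15m
dispatch.latest_time = now
action.correlationsearch.enabled = 1
action.correlationsearch.label = Example Correlation Search
action.notable = 1
action.notable.param.rule_title = Example rule fired
action.notable.param.severity = medium
```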
I have a service that is 1 to many microservice so I am aggregating the backend calls into a single entry.       { "time": "2020-11-11 10:10:12.123", "app": "myApp", "env": "test", "httpMethod": "GET", "request":{ "attributes": [{"key1":"val1"},{"key2":"val2"}] }, "response":{"UI.text":"hello world - Success"}, "totalDuration": 543, "backEndCalls": [ { "method": "GET", "url": "https://myservices.foo/backend1", "reqeust":{ "attributes": [{"key1":"val1"},{"key2":"val2"}] }, "response": { "display.text":"hello world" }, "timing": 123 }, { "method": "GET", "url": "https://myservices.foo/backend2", "reqeust":{ "attributes": [{"key1":"val1"},{"key2":"val2"}] }, "response": { "list":[ {"item1":"name","price":1.00,"tax":0.055}, {"item2":"name","price":10.00,"tax":0.55}, {"item3":"name","price":100.00,"tax":5.5} ] }, "timing": 200 }, { "method": "POST", "url": "https://myservices.foo/backend3", "reqeust":{ "body": { "userinfo": [{"key1":"val1"},{"key2":"val2"}] } }, "response": { "success" : true }, "timing": 220 } ] }       I am trying to list this information in one table.  so that I would have a set of columns for my request followed by a set of columns for each of the backend services in the array.    
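A hedged starting point — this produces one row per backend call rather than one very wide row, which is usually easier to work with in SPL (index and sourcetype are placeholders). The top-level fields survive the mvexpand, so each row still carries the parent request's context:

```
index=your_index sourcetype=your_sourcetype
| spath path=backEndCalls{} output=call
| mvexpand call
| spath input=call
| table time app env httpMethod totalDuration method url timing
```

If you truly need a single row with one column set per backend, you would instead mvzip the per-call fields together (or follow this with xyseries) rather than expanding.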
Hi All, I am looking for a Splunk query to detect vertical and horizontal port scans in the infra. Any help in this regard will be appreciated. Here is the query in layman's terms. Vertical port scan: 1. An external IP performing a scan on a single system for multiple ports. Horizontal port scan: 1. An external IP scanning multiple systems for a single port.
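A hedged sketch of both detections, assuming firewall/traffic events with src_ip, dest_ip and dest_port fields — the index name, field names and thresholds are placeholders to tune against your data. Vertical (one source probing many ports on one host):

```
index=your_firewall_index
| stats dc(dest_port) as port_count by src_ip, dest_ip
| where port_count > 50
```

Horizontal (one source probing one port across many hosts):

```
index=your_firewall_index
| stats dc(dest_ip) as host_count by src_ip, dest_port
| where host_count > 50
```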
I need documentation on configuring a scheduled job for exporting data from Splunk to Hadoop using Splunk Hadoop Connect. What are the prerequisites? I am new to Hadoop. What are the different ways we can export data from Splunk to Hadoop?
I have one Excel file with multiple columns, and I want to find outliers: if column 1 > column 3, it's a fail (it should be detected as an outlier). Is it possible to do this? Is there specific SPL logic I should use for this? (Both columns are in seconds.)
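This doesn't need anything beyond a plain eval comparison once the Excel sheet is saved as CSV and uploaded as a lookup (the file and column names below are placeholders):

```
| inputlookup your_file.csv
| eval result=if(column1 > column3, "fail", "pass")
| where result="fail"
```

If the columns come in as strings, wrap them in tonumber() first so the comparison is numeric rather than lexicographic.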
I have data which sometimes has timestamps and sometimes doesn't. I want those events without timestamp to use file mod time (it's a file monitor input), which is what the documentation leads me to believe is the default behavior if TIME_FORMAT doesn't match (https://docs.splunk.com/Documentation/Splunk/8.0.6/Data/HowSplunkextractstimestamps#How_Splunk_software_assigns_timestamps).   However, I see my data sometimes matched to the last known timestamp instead, accompanied by these kind of messages in _internal:   WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (23) characters of event. Defaulting to timestamp of previous event   How do I explicitly tell Splunk to not fall back to the previous timestamp and instead use file modification time for events without timestamps?
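As far as I can tell there is no per-event knob to skip the previous-event fallback. What props.conf does let you control is whole-sourcetype behavior — DATETIME_CONFIG = NONE disables timestamp extraction entirely, which for a file monitor input means every event gets the file's modification time. A sketch, with the obvious caveat that it also applies to the events that do have timestamps:

```
[your_sourcetype]
DATETIME_CONFIG = NONE
```

If the timestamped and untimestamped events can be separated (different files, or a routable pattern), splitting them into two sourcetypes lets one side keep normal extraction while the other uses modtime.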
When testing Splunk Cloud, I'm getting invalid certificate errors. Depending on which application I use, I may see... "The remote certificate is invalid according to the validation procedure." "Your connection isn't private Attackers might be trying to steal your information from prd-p-dkgtr.splunkcloud.com (for example, passwords, messages, or credit cards). NET::ERR_CERT_AUTHORITY_INVALID "   Test URL: https://prd-p-dkgtr.splunkcloud.com:8088/
UPDATE: I initially reported this on the 'Cisco Secure eStreamer Client (f.k.a Firepower eNcore) Add-On for Splunk' 4.0.7, but that entire TA turned out to be unstable. More recently I've tested version 4.6, which is stable, but it still has the same problem I reported a few months ago. The problem is that the FirePower TA (eNcore) stopped correctly reporting IP addresses in IPv4 format starting with version 4.x for rec_type=112 rec_type_desc="Correlation Event" events. For example, instead of reporting 10.0.0.1 the TA now reports 167772161 (i.e. the number can be converted back to standard IPv4 with a lot of help).   Apparently very few people actually use these events; all other events are fine. I have tested this on multiple servers with multiple Defense Centers. It is definitely an issue with the FirePower v4 TA. Even with a Cisco entitlement, Cisco offers almost no support for this TA.  So, to reiterate, for rec_type=112 rec_type_desc="Correlation Event": FirePower TA v3.x: src_ip=10.0.0.1 dest_ip=10.0.0.2 FirePower TA v4.x: src_ip=167772161 dest_ip=167772162
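Until the TA is fixed, the decimal value can be rebuilt into dotted-quad form at search time. A hedged SPL sketch, guarding with match() so already-correct values pass through untouched:

```
| eval o1=floor(src_ip/16777216)%256, o2=floor(src_ip/65536)%256, o3=floor(src_ip/256)%256, o4=src_ip%256
| eval src_ip=if(match(src_ip, "^\d+$"), o1.".".o2.".".o3.".".o4, src_ip)
| fields - o1 o2 o3 o4
```

For example 167772161 becomes 10.0.0.1; the same pair of evals can be repeated for dest_ip.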
We have an alert configured to send email when the number of results is >20 in 5 minutes, but since this is a timechart-based search, Splunk counts the time buckets as results instead of counting the actual events. Our requirement is to trigger an email when the number of events is >20 in 5 minutes and include the timechart as well as the actual raw events in the email. Is this possible?
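One hedged approach: keep the timechart for the email body but append a single total row for the alert to key on, then use a custom trigger condition such as `search total > 20` (the index and filters are placeholders):

```
index=your_index your_filters
| timechart span=1m count
| appendpipe
    [ stats sum(count) as total ]
```

The "Include results" option in the email action attaches whatever table the search returns, so the timechart rows arrive inline; for the raw events themselves, a common pattern is a second alert or a linked report, since one search cannot easily return both renderings.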
Hello, I got this error while unsuccessfully setting up the connection to Isilon via the app:     2020-09-30 16:18:26,812 ERROR 8308 - Dell Isilon Error: Exception: [HTTP 500] Splunkd internal error; [{'text': "\n In handler 'isilon': Data could not be written: /nobody/TA_EMC-Isilon/inputs/isilon://172.30.44.185::/platform/1/statistics/current?keys=node.ifs.bytes.in.rate,node.ifs.bytes.out.rate&devid=all/sourcetype: emc:isilon:rest", 'code': None, 'type': 'ERROR'}]