All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone! I've just signed up for Splunk and was going to start the Splunk Cloud trial, but when I got to the "start trial" page and clicked the button, it showed me the "Internal Server Error" screen! I would love to understand why. Also, in an attempt to understand what was happening, I got this message from the broken request: "An internal error was detected when creating the stack. Please try again later." Any ideas? Thanks in advance.
How do I use rex to extract the backdoor info and the IP addresses so that I can display this info in my Splunk dashboard?
I'm running the Cisco AMP events input on Splunk 8 with Python 2.7.17 and received the following error after configuring the API keys: ImportError: splunklib.modularinput
Issues with Splunk download on a Windows 10 laptop.
I have these log entries, and I'm trying to extract the underlined data as "Business_Process". I'm using the below regex; on regex101 it extracts just fine, but in Splunk it extracts a huge chunk.  rex field=_raw "\Drun\.name\D:\D(?<Business_Process>.+)\D,\Drun.u"   I get the below result in Splunk:
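A likely cause is the greedy `.+` in the capture group: regex101 was probably testing a single line, while on the full raw event the greedy quantifier runs to the last possible match point rather than the first. A minimal Python sketch of greedy vs. non-greedy behaviour (the sample string is invented for illustration):

```python
import re

# Invented single-line sample loosely modeled on the event in the post.
raw = '"run.name":"Quarterly Billing","run.user":"svc_batch"'

# Greedy .+ backtracks from the END of the string, so it swallows
# every field between the first quote and the last one.
greedy = re.search(r'"run\.name":"(?P<Business_Process>.+)"', raw)

# A negated character class (or a lazy .+?) stops at the first closing quote.
tight = re.search(r'"run\.name":"(?P<Business_Process>[^"]+)"', raw)

print(greedy.group("Business_Process"))  # Quarterly Billing","run.user":"svc_batch
print(tight.group("Business_Process"))   # Quarterly Billing
```

In SPL terms the equivalent change would be replacing the `.+` inside the rex capture group with something like `[^"]+` or `.+?`.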
This is a dashboard panel that I've created by extracting virus information from a log file. This is my search query. I actually want to see only the virus names and the IP addresses, so it should look like this: How would I change my regex to obtain this result?
My query below generates a table, which appears as follows. The issue that I'm trying to resolve is being able to populate non-existent values with "No Data", as shown in the 2020-08-11 column. Can someone provide some assistance on how to do this? I have used fillnull and filldown, but have not been successful.

Service ID | Resource Name | Transaction Name | Priority | Service Area | Consumer | 2020-08-12 | 2020-08-11 | 2020-08-10 | 2020-08-09
ID1        | GET           | Transaction1     | 1        | Area1        | App1     | 3          |            | 4          | 0
ID2        | PUT           | Transaction2     | 2        | Area2        | App2     | 8          |            | 2          | 5

index=test_index_1 sourcetype=test_sourcetype_2
| eval epoch_Timestamp=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3QZ")-14400
| rename "Transaction Name" as trans_name, "Application Name" as application_name, "Status Code" as status_code
| eval service_id=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "ID1", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "ID2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "ID3", 1=1, "Unqualified")
| where service_id!="Unqualified"
| eval Priority=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "2", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "1", 1=1, "Unqualified")
| where Priority!="Unqualified"
| eval service_area=case(Verb="GET" AND trans_name="Transaction1" AND application_name="APP1", "Area1", Verb="GET" AND trans_name="Transaction2" AND application_name="App2", "Area2", Verb="PUT" AND trans_name="Transaction2" AND application_name="App2", "Member", 1=1, "Unqualified")
| where service_area!="Unqualified"
| eval date_reference=strftime(epoch_Timestamp, "%Y-%m-%d")
| stats count(eval(status_code)) as count by service_id, Verb, trans_name, Priority, service_area, application_name, date_reference
| eval combined=service_id."@".Verb."@".trans_name."@".Priority."@".service_area."@".application_name."@"
| xyseries combined date_reference count
| rex field=combined "^(?<service_id>[^\@]+)\@(?<Verb>[^\@]+)\@(?<trans_name>[^\@]+)\@(?<Priority>[^\@]+)\@(?<service_area>[^\@]+)\@(?<application_name>[^\@]+)\@$"
| fillnull value="0"
| table service_id, Verb, trans_name, Priority, service_area, application_name [ makeresults | addinfo | eval time = mvappend(relative_time(info_min_time,"@d"),relative_time(info_max_time,"@d")) | fields time | mvexpand time | makecontinuous time span=1d | eval time=strftime(time,"%F") | reverse | stats list(time) as time | return $time ]
| rename service_id as "Service ID", Verb as "Resource Name", trans_name as "Transaction Name", Priority as "Priority", service_area as "Service Area", application_name as "Consumer"
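One thing worth trying (hedged, since I can't run it against this data) is a second `fillnull value="No Data"` placed after the xyseries step, so it applies once the date columns actually exist. The underlying idea, sketched in Python with invented values matching the shape of the ID1 row:

```python
# Dates that should appear as columns, and one row with a date missing
# (values invented, matching the shape of the ID1 row in the post).
dates = ["2020-08-12", "2020-08-11", "2020-08-10", "2020-08-09"]
row = {"2020-08-12": 3, "2020-08-10": 4, "2020-08-09": 0}

# Fill any absent date column with the sentinel "No Data".
filled = {d: row.get(d, "No Data") for d in dates}
print(filled)
# {'2020-08-12': 3, '2020-08-11': 'No Data', '2020-08-10': 4, '2020-08-09': 0}
```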
Hi All, I am currently ingesting plain text files with a filename format as follows: 4d618da0-48f0-430d-9c9f-10c6e5ba6971_Batch1_20200810.5415.finish. Each day, new files are created with the day's date and a sequential number before the .finish, e.g. 4d618da0-48f0-430d-9c9f-10c6e5ba6971_Batch1_yyyymmdd.nnnn.finish. When the files are ingested, the source name extension is (intermittently) changed from ending '.nnnn.finish' to '.xml', e.g. 4d618da0-48f0-430d-9c9f-10c6e5ba6971_Batch1_20200810.xml. We are running a distributed environment with 4 indexers. This trait is being seen across all indexers and on files being ingested from different servers. As I rely on checking for '.finish' in the source, is there a way of setting props or transforms to stop the file extension being changed? I hope this makes some sense. Thanks in advance for assistance.
Hi, Basically I want to revoke write access for users, but due to business requirements I am supposed to let users run their search queries on the Search and Reporting page in Splunk. Problem: if I give access to the Search and Reporting page to run queries, users will be able to create dashboards, alerts, and reports, since the dashboard and alert menus are present in the navigation bar of the Search and Reporting page. If I remove access to the Search and Reporting page, users will not be able to run their queries and see the output. Is there a way I can restrict access so that users can run only their queries and not access the dashboard menu? P.S. If I remove the default XML code for dashboards in the navigation bar, then admins lose access to it as well. Any solution to this issue would be great! Thanks,
I have logs that say both "contact" and "non contact". I would like to distinguish them in a search by matching the complete phrase "non contact" while eliminating all events that just say "contact".
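One way to express this is a regex that matches "contact" only when it is not preceded by "non ", using a negative lookbehind. A small Python sketch with made-up log lines:

```python
import re

# Invented sample lines covering both phrasings.
lines = [
    "sensor reported non contact with surface",
    "sensor reported contact with surface",
    "non contact event logged",
]

# Negative lookbehind: "contact" counts only when NOT preceded by "non ".
contact_only = [l for l in lines if re.search(r"(?<!non )\bcontact\b", l)]
non_contact = [l for l in lines if re.search(r"\bnon contact\b", l)]

print(contact_only)  # ['sensor reported contact with surface']
print(non_contact)   # the two "non contact" lines
```

In SPL, a simpler route may be searching the quoted phrase "non contact" directly, or excluding it with NOT, depending on which set of events is wanted.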
I have a log file that has 3 different types of headers. There is a unique ID field per line notifying me of what the headers should be. Is there a way to have Splunk regex-match the line with the unique ID and then assign headers to that line? There would be 3 different regex matches with unique headers.
Hello Splunk members! I currently have a search that produces "Users" connecting to certain "hosts" where the connection status is "created". The output is displayed as | timechart span=d dc(users) by host. Here is a sample of what my search currently looks like:

index=search "remote access" (host="1.1.1.1" OR host="2.2.2.2" OR host="3.3.3.3") | eval host = case(host="1.1.1.1", "server1", host="2.2.2.2","server2", host="3.3.3.3", "server3") | rex field=_raw "\shas\sbeen\s\(?P<status>(created))\." | rex field=msg "\(SPI=\s(?P<session_id>\w+)\)" | timechart span=d dc(users) by host

Now my aim is to have a lookup file (User-site.csv) with 2 fields:
1. Unique user ID
2. Site code
I would like to filter users based on site. For example, I want my timechart to only produce results for site "ABC" where users connected. The user ID in the lookup is not a 100% match to the (users) field in the initial search, so I think I need something like the "like" function to compare similar characteristics of my lookup's user ID field with users and then filter the events based on site code (i.e. ABC). I would appreciate your assistance with this; I have previously made this work, but this time the lookup fields are not a 100% match, so I can't figure out a way to use LIKE here. Thanks, MJA411
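Since the lookup IDs only partially match the event usernames, an exact lookup join won't work; a substring test (what SPL's like("%id%") or match() would express) is one option. A rough Python sketch with invented user IDs and site codes:

```python
# Invented lookup: unique user ID -> site code.
lookup = {"jdoe01": "ABC", "asmith02": "XYZ"}

# Invented event usernames: decorated variants of the lookup IDs.
events = ["CORP\\jdoe01", "asmith02@corp.local", "unknown_user"]

def site_for(user):
    """Return the site whose lookup ID appears inside the event username."""
    for uid, site in lookup.items():
        if uid in user:  # substring match, like SPL's like("%uid%")
            return site
    return None

# Keep only users belonging to site ABC.
abc_users = [u for u in events if site_for(u) == "ABC"]
print(abc_users)  # ['CORP\\jdoe01']
```

Note that substring joins can produce false positives if one ID is a prefix of another, so tightening the match (e.g. on a delimiter) may be needed in practice.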
The User Guide for ESCU version 3.0.5 (https://docs.splunk.com/Documentation/ESSOC/3.0.5/user/ConfigureSplunkEnterpriseSecurity(ES)touseMLTK) refers to the ES User Guide version 5.2.2 (https://docs.splunk.com/Documentation/ES/5.2.2/Install/ImportCustomApps#Import_add-ons_with_a_different_naming_convention) on how to install custom apps, in this case MLTK. The problem is, the same ES User Guide page for the current ES version (6.2.0) does not exist. I tried to follow the ESCU guide and configure "App Imports Update" but was unable to edit the "update_es" input. Shouldn't this be updated? What is the correct configuration of MLTK for ESCU?
Hi All, We are planning to ingest the SQL login success and failure logs into Splunk. In the logs there are a lot of events, but we want to ingest only the "Login succeeded for user" and "Login failed for user" information. Kindly help provide the regex for the same. Sample events look like the below: 2020-08-10 06:00:00.89 Logon Login succeeded for user 'ad\SQL_abcde123'. Connection made using Windows authentication. [CLIENT: <local machine>] 2020-08-10 06:00:01.59 Logon Login succeeded for user 'xyz'. Connection made using SQL Server authentication. [CLIENT: xxx.xxx.xxx.xxx] 2019-08-10 05:00:01.59 Logon Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: xxx.xxx.xx.xxx]
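The ingest-time filtering itself would normally be done with a props/transforms pair, but the heart of it is a regex such as `Login (succeeded|failed) for user`. A quick Python check of that regex against the sample events (the third event is invented to show what gets dropped):

```python
import re

events = [
    "2020-08-10 06:00:00.89 Logon Login succeeded for user 'ad\\SQL_abcde123'. "
    "Connection made using Windows authentication. [CLIENT: <local machine>]",
    "2019-08-10 05:00:01.59 Logon Login failed for user ''. Reason: An attempt "
    "to login using SQL authentication failed. [CLIENT: xxx.xxx.xx.xxx]",
    "2020-08-10 06:00:02.00 Logon Some unrelated logon event.",  # invented filler
]

# Keep only the login success/failure lines; everything else is filtered out.
pattern = re.compile(r"Login (succeeded|failed) for user")
kept = [e for e in events if pattern.search(e)]
print(len(kept))  # 2
```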
We're using a REST API to connect to a case / monitoring system and retrieve any data newer than the last run. This data is a per-case JSON construct that contains all current information about the case, such as its ID, status, timestamps, associated alert counts, etc. So, Splunk events are written when:
- A new case is opened.
- An existing case's status is changed.
- An existing case's assigned user is changed.
- etc.
I'm having trouble with one dashboard panel in particular: a single value to show the number of alerts that are currently open, plus a sparkline for trends / history. Because of the way that the data is structured and transferred, I've only been able to do one or the other. The following search works pretty well when new cases are opened (because the number of returned events increases between each search) but it "forgets" cases when they're closed (because the search filter causes the number of returned events to decrease).

index="<client>" case_id | dedup 1 case_id sortby -_time | search (status=new OR status=in_progress) | append [| makeresults | eval alert_count_total = 0] | reverse | timechart sum(alert_count) as alert_count_total | addcoltotals | filldown alert_count_total

I guess it needs some kind of "if the cases are open then sum the alerts, but if not then set the alert sum to 0" logic, so I created the following search, which seems to get what I want but only separately - the Single Value visualisation seems to be incompatible:

index="<client>" case_id | dedup 1 case_id sortby -_time | eval openorclosed=if(status="new" OR status="in_progress", "Open", "Closed") | eval alert_count_open=case(openorclosed="Open", alert_count, openorclosed="Closed", 0) | stats sparkline(sum(alert_count)), sum(alert_count_open)

Also, I've found SPL2's branch command, which seems very promising, but I don't know how to use datasets, etc., which seem to be required. Thanks in advance.
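The "if the cases are open then sum the alerts, else 0" logic can be stated directly; a toy Python version with invented case snapshots (in SPL this is essentially the eval/case you already wrote, feeding a single stats sum):

```python
# Invented per-case snapshots: latest state per case_id, as after the dedup.
cases = [
    {"case_id": 1, "status": "new", "alert_count": 3},
    {"case_id": 2, "status": "closed", "alert_count": 5},
    {"case_id": 3, "status": "in_progress", "alert_count": 2},
]

# Only open cases contribute; a closed case contributes 0, so the
# total correctly drops when a case is closed rather than being "forgotten".
open_total = sum(c["alert_count"] for c in cases
                 if c["status"] in ("new", "in_progress"))
print(open_total)  # 5
```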
When I try to launch the Fundamentals 1 course, it doesn't work. Does anyone else have the same issue?
I want to set up real-time alerting. When setting up the alert query, the alert type is auto-populated to "Scheduled alert". Could anyone help me with this?
Hi everyone, It might be a silly question. The simplified case: 3 artifacts within the event with 3 different IP addresses: 192.168.0.1, 192.168.0.2, 8.8.8.8. I'm trying to check if an IP is local and make separate queries to CrowdStrike (it could be any other app). Each query should use the filter parameter local_ip:"{0}", so I'm using a Format gadget. I'm getting an error during execution because the "Format" function returns a joined value: local_ip:"192.168.0.1, 192.168.0.2". It then launches the CrowdStrike app just once with this filter, but it should be 2 different requests, with a separate IP address in each one. I tried to use as a filter parameter for the CrowdStrike app: "format_2:formatted_data.*" - returns None; "format_2:formatted_data" - returns "192.168.0.1, 192.168.0.2" as one string. So, how do I make 2 different requests here? Thanks.
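The core fix is splitting the joined value back into one filter per IP before fanning out the requests. A minimal Python sketch of that split (the local_ip filter template comes from the post; the rest is generic illustration, not playbook API code):

```python
# The joined value the Format gadget produced.
joined = "192.168.0.1, 192.168.0.2"

# Split on the comma and build one local_ip filter per address,
# so each filter can drive a separate app run.
filters = ['local_ip:"{0}"'.format(ip.strip()) for ip in joined.split(",")]
print(filters)  # ['local_ip:"192.168.0.1"', 'local_ip:"192.168.0.2"']
```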
Hello, I have installed the Reporting Add-on and fixed the Error 500. The add-on is now connecting to Microsoft and always moves on to the next endpoint URL. If I manually open "https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?%5C$filter=StartDate%20eq%20datetime%272020-08-07T08:55:35.511356Z%27%20and%20EndDate%20eq%20datetime%272020-08-07T09:55:35.511356Z%27" I get logs. If I manually open the next endpoint URL, I also get logs. Looks good from my perspective. Still, I am getting no data in my index. The index is working; we are getting data in via another add-on (sourcetype="o365:management:activity"). Can anyone assist me in finding the problem? Thank you in advance. br Andreas

2020-08-12 08:55:35,493 INFO pid=24279 tid=MainThread file=splunk_rest_client.py:_request_handler:105 | Use HTTP connection pooling
2020-08-12 08:55:35,494 DEBUG pid=24279 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/TA_MS_O365_Reporting_checkpointer (body: {})
2020-08-12 08:55:35,495 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): 127.0.0.1:8089
2020-08-12 08:55:35,499 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/TA_MS_O365_Reporting_checkpointer HTTP/1.1" 200 5516
2020-08-12 08:55:35,500 DEBUG pid=24279 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.006632
2020-08-12 08:55:35,501 DEBUG pid=24279 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/ (body: {'search': 'TA_MS_O365_Reporting_checkpointer', 'offset': 0, 'count': -1})
2020-08-12 08:55:35,504 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET
/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/config/?search=TA_MS_O365_Reporting_checkpointer&offset=0&count=-1 HTTP/1.1" 200 7417
2020-08-12 08:55:35,504 DEBUG pid=24279 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.003792
2020-08-12 08:55:35,507 DEBUG pid=24279 tid=MainThread file=binding.py:get:677 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/at_o365_phs_obj_checkpoint (body: {})
2020-08-12 08:55:35,510 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-MS_O365_Reporting/storage/collections/data/TA_MS_O365_Reporting_checkpointer/at_o365_phs_obj_checkpoint HTTP/1.1" 404 140
2020-08-12 08:55:35,511 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Start date: 2020-08-07 08:55:35.511356, End date: 2020-08-07 09:55:35.511356
2020-08-12 08:55:35,511 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?\$filter=StartDate eq datetime'2020-08-07T08:55:35.511356Z' and EndDate eq datetime'2020-08-07T09:55:35.511356Z'
2020-08-12 08:55:35,511 INFO pid=24279 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2020-08-12 08:55:35,513 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): reports.office365.com:443
2020-08-12 08:55:37,001 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_make_request:437 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?%5C$filter=StartDate%20eq%20datetime'2020-08-07T08:55:35.511356Z'%20and%20EndDate%20eq%20datetime'2020-08-07T09:55:35.511356Z' HTTP/1.1" 200 None
2020-08-12 08:55:37,116 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Next URL is https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999
2020-08-12 08:55:37,117 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999
2020-08-12 08:55:37,117 INFO pid=24279 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2020-08-12 08:55:37,118 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): reports.office365.com:443
2020-08-12 08:55:38,633 DEBUG pid=24279 tid=MainThread file=connectionpool.py:_make_request:437 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=1999 HTTP/1.1" 200 None
2020-08-12 08:55:38,745 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Next URL is https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=3999
2020-08-12 08:55:38,745 DEBUG pid=24279 tid=MainThread file=base_modinput.py:log_debug:288 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$skiptoken=3999
2020-08-12 08:55:38,746 INFO pid=24279 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!