All Topics

Hey all, I'm trying to separate out the IP address (Source Network Address:) from the Windows event Message field. I'm trying to find every instance of authentication for a certain user account (including services and IIS app pools), but I have to parse it out of the logs coming from our domain controllers. I'm pretty new to Splunk, so my search is fairly basic. This is all I have so far, and I haven't been able to find out whether the rex field argument should be the new field I'm trying to create, the field I'm searching, or something else:

index=* user=username EventCode=4740 OR EventCode=4648 OR EventCode=4672 OR EventCode=4624 | rex field=ServerIP "Source Network Address:(?<Message>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"

So what I'm trying to do:
Search the Windows event log Message field
Find the string "Source Network Address:ipaddress"
Create a new field whose value is that IP address

Appreciate any help!
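A minimal sketch of the extraction being asked about, assuming the IP follows "Source Network Address:" inside the Message field. In rex, field= names the field to search, and the named capture group names the field to create (src_ip is an illustrative name); the EventCode ORs are grouped so they combine with the user filter:

index=* user=username (EventCode=4740 OR EventCode=4648 OR EventCode=4672 OR EventCode=4624)
| rex field=Message "Source Network Address:\s*(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| stats count by src_ip
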
The calculation has to be made on Team Availability: starting from a value of 96000, subtract the current row's Time Required and display the result in the next row of Team Availability. That most recent Team Availability value is then used for the next subtraction with Time Required, with the result displayed in the row after that, and so on for all Time Required values. The last image shows the expected output. Please help me correct this and share the query.
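A minimal sketch of this kind of running subtraction, assuming a starting availability of 96000 and a field named Time_Required (both taken from the description above and illustrative):

... | streamstats sum(Time_Required) as total_required
| eval Team_Availability = 96000 - total_required

This puts the remaining availability on the same row as each subtraction; if it must appear on the following row instead, autoregress can shift it down by one.
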
I am looking to run a search and filter out whitelisted exceptions in a lookup file. Two of the fields could contain multiple values, though. Here's the search I'm using:

index=microsoft365 sourcetype IN (azure:aad:signin, o365:management:activity) (action=success OR status=success) NOT Operation=UserLoginFailed
| eval user_id=lower(user_id)
| dedup src user_id date_month
| iplocation src
| search NOT Country IN ("United States", "Canada")
| lookup local=t asn ip AS src
| lookup nonUSlogins.csv ca.user_id AS user_id OUTPUT a.country a.ticket a.user_id
| table user_id src date_month Country Region City asn autonomous_system a.user_id a.country

I tried using match but found you can't use match if there are multiple values in a single field. Here is an example of the current results:

user_id  src      date_month  Country  Region           City    asn  autonomous_system  a.user_id    a.country
user1    1.1.1.1  june        Albania  Tirana District  Tirana                          user1 user1  Albania Canada
user1    2.2.2.2  june        Germany  Land Berlin      Berlin                          user1 user1  Albania Canada

I'm trying to eliminate results where the value of user_id matches a.user_id (values in this field will be the same when there are multiple) AND the value of Country matches one of the countries listed in a.country.

I would expect to see this in the end:

user_id  src      date_month  Country  Region       City    asn  autonomous_system  a.user_id    a.country
user1    2.2.2.2  june        Germany  Land Berlin  Berlin                          user1 user1  Albania Canada
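A hedged sketch of one way to do the multivalue comparison, assuming a.user_id and a.country are multivalue fields returned by the lookup. mvfind returns the index of the first value matching a regex, or null when nothing matches; field names containing dots must be wrapped in single quotes in eval expressions:

... | eval wl_user = mvindex('a.user_id', 0)
| eval country_hit = mvfind('a.country', "^" . Country . "$")
| where NOT (user_id == wl_user AND isnotnull(country_hit))
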
Hello, I'm looking to get this result between each start/end time; I hope you can help me. For example:

Start time   End time     Su  M  Tu  W  Th  F  Sa
2021/07/01   2021/07/17   2   2  2   2  3   3  3
2021/07/05   2021/07/20   2   3  3   2  2   2  2

Thanks in advance
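A minimal sketch of counting each weekday in the range, assuming fields named start_time and end_time in the YYYY/MM/DD format shown (names illustrative; mvrange generates one epoch timestamp per day, with +86400 making the end date inclusive):

... | eval day = mvrange(strptime(start_time, "%Y/%m/%d"), strptime(end_time, "%Y/%m/%d") + 86400, 86400)
| mvexpand day
| eval dow = strftime(day, "%a")
| chart count over start_time by dow
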
Hi, I have a log file like this:

2021-07-15 00:00:01,869 INFO APP.InEE-p1-1234567 [AppListener] Receive Message[A123]: Q[p1.APP], IID[null], Cookie[{"NODE":"0000aa000"}]
...
2021-07-15 00:00:01,988 INFO APP.InEE-p1-1234567 [AaaPowerManager] Send Message [X0000A0000] to [APP.p2] with IID[null], LTE[00000]
...
2021-07-15 00:00:11,714 INFO APP.InE-p2-9876543 [AppListener] Receive Message[Y000000Z00000]: Q[p2.APP], IID[null], Cookie[null]
...
2021-07-15 00:00:11,747 INFO APP.InEE-P2-9876543_CLIENT.InEE-p1-1234567 [AaaPowerManager] Send Message [A123] to [APP.p1] with IID[null], LTE[00000]
...

I want to calculate the duration of each transaction, like this ("Send Message" - "Receive Message" = duration):

00:00:11,747 - 00:00:01,869 = 00:00:09,878

Expected output:

id        duration
1234567   00:00:09,878

Any idea? Thanks
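A minimal sketch using stats, assuming the transaction id is the trailing digit run in the InEE-p1-1234567 token (the rex pattern is illustrative and would need adjusting to the real naming scheme; in particular, the combined APP..._CLIENT... name in the last sample line carries two ids, and only the first would be captured here):

index=... sourcetype=... ("Receive Message" OR "Send Message")
| rex "InEE?-[pP]\d-(?<id>\d+)"
| stats min(_time) as first_seen, max(_time) as last_seen by id
| eval duration = tostring(last_seen - first_seen, "duration")
| table id duration
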
Hi all, I'm a very new Splunk user, so I'm learning as I go. I simply want to connect Splunk Enterprise to this API, https://api.beta.ons.gov.uk/v1, and start interrogating the data.

I have installed splunk_app_db_connect and am following this guidance: https://www.progress.com/tutorials/jdbc/connect-to-any-rest-api-from-splunk-enterprise

But I am getting an error:

Unable to initialize modular input "server" defined in the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 1).

and looking at the log file I am getting this:

2021-07-16T12:07:42+0100 [WARNING] [settings.py], line 107: java home auto detection failed

I'm struggling to understand what else to tweak to resolve this, as it's a desktop install of Splunk and I can't see any other apps I am meant to have running in the background. Any ideas? Or is there a better app to use to ingest API data?
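The warning suggests DB Connect could not locate a Java runtime on its own. On DB Connect 3.x the JRE path can be set through the app's Configuration > Settings page or, equivalently, in a local settings file; a sketch of what that might look like, assuming a DB Connect 3.x layout and an illustrative JRE path (the exact file and stanza may differ by version):

# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/dbx_settings.conf
[java]
javaHome = /usr/lib/jvm/java-8-openjdk-amd64
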
Is there a utility corresponding to SendToSplunk for Linux? (The Splunk Universal Forwarder is oversized for my requirement.) See https://helgeklein.com/free-tools/sendtosplunk-send-text-data-splunk-tcp-port/ SendToSplunk – Send Text Data to a Splunk TCP Port
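Not an exact equivalent, but anything that can write to a TCP socket can fill the same role against a raw TCP input. A sketch, assuming an input like the following is enabled on the receiving Splunk instance (port and sourcetype are illustrative), after which something as small as nc splunkhost 5514 < file.txt can deliver the text:

# inputs.conf on the receiving Splunk instance
[tcp://5514]
sourcetype = my_text_data
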
Hello all, how can I overwrite the default eval definition for the field app in props.conf?

default/props.conf:

...
EVAL-app = "Blue Coat ProxySG"
...

I tried to overwrite this field with the following in local/props.conf:

...
FIELDALIAS-app = x_bluecoat_application_name as app
...

We use a distributed environment, so I changed this in the SH and HF copies of the app, but the results did not change. What am I doing wrong?
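One thing worth noting: at search time, field aliases are applied before calculated fields, so the EVAL-app from default/props.conf overwrites whatever the FIELDALIAS sets. Overriding the EVAL itself in local/props.conf (local takes precedence over default for the same attribute in the same stanza) is usually the cleaner route; a sketch, assuming a stanza name matching the one in default/props.conf:

# local/props.conf, same stanza as in default/
[your:sourcetype]
EVAL-app = coalesce(x_bluecoat_application_name, "Blue Coat ProxySG")
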
Hi Splunk Community. I have an alert which runs a query regularly, for example hourly, 24*7*365. If the alert is triggered, an email is sent to a distribution list. I need two distinct distribution lists on this alert: one should receive the email during working hours, Monday 7am to 6pm, and the other should receive it outside working hours. Is there a simple way to do this with just one alert, or is it better to create two identical alerts?

Thanks
Michal
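It can be done with one alert by computing the recipient list inside the search and referencing the result token in the email action. A sketch, assuming illustrative list addresses and a weekday 7am-6pm window (adjust the condition to the actual schedule):

... existing alert search ...
| eval hour = tonumber(strftime(now(), "%H")), dow = strftime(now(), "%a")
| eval recipients = if(dow!="Sat" AND dow!="Sun" AND hour>=7 AND hour<18, "dayshift@example.com", "offhours@example.com")

Then set the email action's To field to $result.recipients$.
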
Hi, I don't know if it is possible, but I would like to specify the time range of a join subsearch from a calculated value. I have a log record and query similar to the following:

Log record:

myField=abc, collectionTimeEpoch=1626358999, maxDurationInSeconds=10, items=[id=00000000-00000000-00000000-00000000#content=123,id=myId2#content=456]

The query:

index="..." sourcetype="..." myField=abc
| sort -_time
| head 1
| eval itemList=split(items,",")
| mvexpand itemList
| rex field=itemList "(?<id>[-\w\d]+)#content=(?<content>[-\w\d]+)"
| eval start=(collectionTimeEpoch-maxDurationInSeconds)
| join type=left id [search earliest=-2d@d index="..." sourcetype="..." someField=someValue ]

I would like to replace earliest=-2d@d with something like earliest=start, but that does not work. I have also tried:

| join type=left id [search earliest=[stats count | eval earliest=(collectionTimeEpoch-maxDurationInSeconds) | fields earliest ] index="..." sourcetype="..." someField=someValue ]

Could you help me with this? Thanks in advance
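A subsearch cannot see fields from the outer search, so earliest=start won't resolve there. One hedged alternative is the map command, which re-runs a templated search once per input row, substituting field values via $...$ tokens (index, sourcetype, and field names below are illustrative):

... | eval start = collectionTimeEpoch - maxDurationInSeconds
| map maxsearches=10 search="search index=... sourcetype=... earliest=$start$ someField=someValue id=$id$"

Note that map replaces the incoming rows with the subsearch results, so it behaves differently from a left join.
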
Good morning, I am looking at generating a search to find the 1% slowest requests from IIS logs, but I am not sure if this is possible. Just wondered if anyone has done something similar before?

Thanks

Joe
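A minimal sketch, assuming the standard IIS time_taken field (in milliseconds): compute the 99th percentile across the result set with eventstats, then keep only the events at or above it:

index=iis sourcetype=iis
| eventstats perc99(time_taken) as p99
| where time_taken >= p99
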
High CPU utilization is observed for the splunkd and python3.7 processes on a Splunk HF after a Splunk Enterprise upgrade from 7.x to 8.1.4. Any help would be appreciated. Thank you.
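As a starting point for narrowing this down, the introspection data on the HF breaks CPU usage out per process; a hedged sketch against the standard resource-usage events (<your_hf> is a placeholder):

index=_introspection host=<your_hf> sourcetype=splunk_resource_usage component=PerProcess
| stats max(data.pct_cpu) as peak_cpu by data.process, data.args
| sort - peak_cpu
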
Hi, first off, apologies if this is the wrong forum to post this, but I am stuck and need help. I currently have a test environment set up as below: Symantec SEPM is sending syslog to a VIP on a load balancer, which then forwards to one of two HFs.

Flow is as follows: Symantec SEPM > LB > HF

Configuration as shown below:
Symantec SEPM version 14.3 RU1 with the following syslog configuration:
Syslog IP: VIP of the load balancer
Syslog destination port: TCP 514
Syslog line separator: LF

The LB is configured to forward the logs to the HF via port 9997.

Issue: the risk logs that used to come through have now stopped arriving.

If I have missed out anything, please let me know. Any feedback is greatly appreciated.

Regards,
Mikhael
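One detail worth checking: port 9997 is conventionally a splunktcp input, which speaks the Splunk forwarder protocol and will not accept raw syslog. Raw TCP syslog needs a plain [tcp://...] input on the HF; a sketch with an illustrative port and sourcetype:

# inputs.conf on the HF
[tcp://514]
sourcetype = symantec:ep:syslog
connection_host = ip
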
Hey guys, quick question. Is it possible to use the REST API to run a search query but not receive a stream of objects, and instead have the results batched into an array of objects?

Query: services/search/jobs/export?output_mode=json&search=savedsearch infraportal_login_history user=mrodrig1&summarize=true

Currently receiving a stream of objects like this:

{"preview":false,"offset":0,"result":{"Computer_Name":"computername","_time":"2021-07-14 13:01:07.000 EDT"}}
{"preview":false,"offset":1,"result":{"Computer_Name":"computername","_time":"2021-07-14 17:01:08.000 EDT"}}
{"preview":false,"offset":2,"lastrow":true,"result":{"Computer_Name":"computername","_time":"2021-07-15 16:01:08.000 EDT"}}

I would prefer to have something like [{result},{result2}]
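The streaming behavior is specific to the export endpoint. Creating a normal search job and then fetching its results returns a single JSON object containing a "results" array; a sketch of the two-step flow (host and sid are illustrative):

POST https://<host>:8089/services/search/jobs
    search=savedsearch infraportal_login_history user=mrodrig1
    output_mode=json
-> returns a sid

GET https://<host>:8089/services/search/jobs/<sid>/results?output_mode=json
-> returns {"results":[{...},{...}], ...} once the job completes
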
I have a large Splunk & ES environment and use the DMC daily. Are there SPL searches that would help me perform such tasks? Thank you in advance.
Hi, I noticed that Dashboard Studio supports customized text anywhere on the dashboard, as shown below the "view all". However, if I choose to use Dashboard Studio, I assume I lose the ability to modify the Simple XML code or use JS scripts on that dashboard. Is it possible to have such a drill-down text on the dashboard without using Dashboard Studio? Or is it possible to modify the Dashboard Studio code to achieve more customized functionality like that provided by XML? Thank you!
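In Simple XML, free-form text and links can be placed with an <html> panel; a minimal sketch (the link target is illustrative):

<row>
  <panel>
    <html>
      <a href="/app/search/my_detail_dashboard">View all</a>
    </html>
  </panel>
</row>
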
I have configured the Stream add-on on the UF and specified the location of the Stream app on the SH, as per the docs. In tcpdump I can see traffic going back and forth between the SH and the UF; however, I don't see the UF coming up as a forwarder in the Stream app under the "Distributed Forwarder Management" page. The docs make no mention of how to troubleshoot this type of connection. Can somebody please help with this?
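For comparison, the add-on side of the wiring is a single inputs.conf stanza on the UF pointing at the Stream app's URL on the SH; a sketch per the documented layout (the hostname is illustrative; a mismatch here, an HTTP-vs-HTTPS mixup, or a missing trailing path is a common reason a forwarder never registers):

# Splunk_TA_stream/local/inputs.conf on the UF
[streamfwd://streamfwd]
splunk_stream_app_location = https://<search_head>:8000/en-us/custom/splunk_app_stream/
disabled = 0
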
What is the proper way of upgrading over 70 apps and add-ons to their new versions in Splunk Enterprise and ES?
Is there a way to configure the management port, which is used to access the REST API, to use the TLS certs we have from DigiCert? I have server.conf set up with serverCert pointing to the location of that file. However, when I check that port using openssl, we still get the certificate that ships with Splunk, despite server.conf being set up to use the DigiCert certificate. Note: web access does correctly use the DigiCert certificate; it's just access over the management port that doesn't.
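For reference, the management port (splunkd, 8089 by default) reads its certificate from the [sslConfig] stanza of server.conf, which is separate from the web-interface settings; the PEM typically needs to contain the server certificate, its private key, and any intermediate chain, and splunkd must be restarted afterwards. A sketch with illustrative paths:

# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
serverCert = /opt/splunk/etc/auth/digicert/server_cert_and_key.pem
sslPassword = <private key password, if any>
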
I have a few reports created in Splunk Enterprise and would like to clone them to ES so they don't have to be re-created. Thank you.
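If ES runs on the same search head, reports live as stanzas in savedsearches.conf, so one hedged approach is copying the stanzas into the ES app's local directory (paths and stanza below are illustrative), then restarting Splunk and checking the sharing permissions (metadata/local.meta) so the reports are visible in ES:

# copy from etc/apps/search/local/savedsearches.conf
# into   etc/apps/SplunkEnterpriseSecuritySuite/local/savedsearches.conf
[My Report]
search = index=web status=500 | stats count by host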