All Topics


Hello, I'm trying to capture the IP address from the PXE log sample shown below. I also want to trim any leading zeros so I can use the IP as an index. I feel I'm pretty close on this one.

Log sample:

Operation: BootRequest (1) Addr type: 1 Addr Len: 6 Hop Count: 0 ID: 0001E240 Sec Since Boot: 65535 Client IP: 018.087.789.006 Your IP: 000.000.000.000 Server IP: 178.187.178.874 Relay Agent IP: 000.000.000.000 Addr: 87:f3:78:a5:78:b2: Magic Cookie: 63878263

Splunk search:

index="*********" source="D:\\SMS_DP$\\sms\\logs\\SMSPXE.log"
| rex field=_raw "Addr: (?<Time>\d.{16})"
| rex field=_raw "Addr: (?<PXE_MAC>\d.{16})"
| rex field=_raw "Type=97 UUID: (?<PXE_UUID>\d.{33})"
| rex field=_raw "Client IP: (?<PXE_IP>\d.{14})"
| rex field=PXE_IP "^(?<PXE_IP_MOD>\b0+(\d+))"
| rex field=_raw " date=\"(?<PXE_Date>\d.{9})"
| rex field=_raw "><time=\"(?<PXE_Time>\d.{7})"
| rex field=_raw "Type=53 Msg Type: (?<PXE_Traffic>\w.{4})"
| rex field=_raw "Type=93 Client Arch: (?<PXE_Arch>\w.{3})"
| where isnotnull(PXE_Traffic)
| rename host as PXE_Host
| table PXE_Host, PXE_Traffic, PXE_MAC, PXE_IP, PXE_IP_MOD, PXE_UUID, PXE_Arch, PXE_Date, PXE_Time
| sort by PXE_Date, PXE_Time desc

The regex was built and tested on regex101.
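The leading-zero trim being attempted can be sketched outside Splunk first; the same two-step idea (capture the IP, then normalize each octet) carries over to rex plus eval. A minimal Python illustration using the sample values from the post (a sketch, not the thread's answer):

```python
import re

def extract_ip(raw):
    """Capture the Client IP from a PXE log line."""
    m = re.search(r"Client IP: (\d{1,3}(?:\.\d{1,3}){3})", raw)
    return m.group(1) if m else None

def strip_leading_zeros(ip):
    """Drop leading zeros from every octet, not just the first one."""
    return ".".join(str(int(octet)) for octet in ip.split("."))

raw = "Sec Since Boot: 65535 Client IP: 018.087.789.006 Your IP: 000.000.000.000"
print(strip_leading_zeros(extract_ip(raw)))  # 18.87.789.6
```

In SPL the per-octet normalization can likely be done in one step with something like `eval PXE_IP_MOD=replace(PXE_IP, "(^|\.)0+(\d)", "\1\2")`, which trims zero runs at the start of each octet rather than only at the start of the string (untested here, so treat it as a sketch).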
I'm having trouble working out how to authenticate to the Splunk Cloud ACS API using a local account. The docs suggest you can do this: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2106/Admin/ConfigureIPAllowList I can hit the API successfully with a SAML token, but right now I need to be able to authenticate with a local account. Can anyone shed some light on how you're meant to do that? I've tried the session token returned by the auth/login endpoint, and also Basic auth (user/pass), but neither works.
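One pattern worth trying (an assumption, not something this post confirms): ACS generally wants a JWT authentication token in a Bearer header, and a local account can own such a token (created under Settings > Tokens); the short-lived session key from auth/login is a different kind of credential. A hedged sketch of the request shape, with a hypothetical stack name:

```python
import urllib.request

def build_acs_request(stack, path, token):
    """Build the URL and Bearer header for a Splunk Cloud ACS call.

    'stack' and 'path' are placeholders; the token is assumed to be a
    Splunk authentication (JWT) token, not an auth/login session key.
    """
    url = f"https://admin.splunk.com/{stack}/adminconfig/v2/{path}"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

def acs_get(stack, path, token):
    url, headers = build_acs_request(stack, path, token)
    return urllib.request.urlopen(urllib.request.Request(url, headers=headers))
```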
Greetings, I am setting up a new 8.2.2 environment on Red Hat 8.1 and trying to get Splunk to start on boot, running under a different user than root. I can start it manually as the "splunk" user without any problems, but on boot it does not start. What I have done so far:

$SPLUNK_HOME/bin/splunk enable boot-start -user splunk

In /etc/init.d/splunk:

#!/bin/sh
RETVAL=0
. /etc/init.d/functions

splunk_start() {
  echo Starting Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" start --no-prompt --answer-yes'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}

splunk_stop() {
  echo Stopping Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" stop'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/splunk
}

splunk_restart() {
  echo Restarting Splunk...
  su - splunk -c '"/opt/splunk/bin/splunk" restart'
  RETVAL=$?
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/splunk
}

splunk_status() {
  echo Splunk status:
  su - splunk -c '"/opt/splunk/bin/splunk" status'
  RETVAL=$?
}

case "$1" in
  start)   splunk_start ;;
  stop)    splunk_stop ;;
  restart) splunk_restart ;;
  status)  splunk_status ;;
esac
exit $RETVAL

In /opt/splunk/etc/splunk-launch.conf:

# Version 8.2.2

# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
#
# SPLUNK_HOME=/opt/splunk

# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden here:
#
# SPLUNK_DB=/opt/splunk-home/var/lib/splunk

# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd

# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER
SPLUNK_OS_USER=splunk

In sudoers:

splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk restart
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk stop
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk start
splunk ALL=(ALL) NOPASSWD: /opt/splunk/bin/splunk status

Could it be an issue with SELinux?

Thanks in advance,
John
Hi, I've uploaded a file with a Chinese name. The content (also in Chinese characters) displays and queries normally, but the source name shows up as garbled characters in the web browser. I've tried different browsers and changed CHARSET in props.conf, but neither fixed it. Does anyone know how to solve this issue? Thanks a lot.
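A likely failure mode (an assumption, not confirmed by the post) is UTF-8 bytes being decoded somewhere in the chain with a single-byte charset; that kind of damage is reversible, which can help confirm the diagnosis. A small Python round-trip demo:

```python
# A Chinese filename, UTF-8 encoded, then wrongly decoded as Latin-1;
# this produces exactly the kind of garbled source name seen in a browser.
name = "测试文件.log"
garbled = name.encode("utf-8").decode("latin-1")

# If a wrong decode is the only damage, reversing it recovers the name:
repaired = garbled.encode("latin-1").decode("utf-8")
print(repaired == name)  # True
```

If pasting the garbled source name through a round-trip like this recovers the original, the problem is a charset mismatch in the ingestion path rather than in the browser.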
Hello guys, I need help building a query that groups these values into the output shown below.

Current:
apple1
apple-orange
apple-yellow
banna123
banna-red
banna-orange

Output:
apple*
banna*
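The grouping amounts to keeping the leading alphabetic prefix and wildcarding the rest. The logic, sketched in Python (an SPL equivalent would likely be an `eval` using `replace()` followed by `stats count by group`, but that is an assumption, not a tested search):

```python
import re

def prefix_group(value):
    """Keep the leading letters of a value and wildcard the remainder."""
    m = re.match(r"[A-Za-z]+", value)
    return m.group(0) + "*" if m else value

values = ["apple1", "apple-orange", "apple-yellow",
          "banna123", "banna-red", "banna-orange"]

# dict.fromkeys de-duplicates while preserving first-seen order
groups = list(dict.fromkeys(prefix_group(v) for v in values))
print(groups)  # ['apple*', 'banna*']
```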
Hello, unfortunately I have to attempt a restore of copies of old db_* and rb_* structures that were basically rsync'd over time to some cold storage. I am noticing that things like ".bucketManifest" don't exist. I am trying to restore them to a net-new indexer cluster with the index configured in indexes.conf. I am happy to do this on a standalone indexer if that's the right way, assuming this is even possible. To be clear, I have all of the directories prefixed with rb_* and db_*, but nothing else.

*EDIT* I actually only have db_*/rawdata/journal.gz and rb_*/rawdata/journal.gz

Thanks
When I try to use Splunk Add-on for Cisco Meraki for my Access Points I get this API error in the logs: meraki.exceptions.APIError: networks, getNetworkEvents - 400 Bad Request, {'errors': ['productType is not applicable to this network']} My Meraki organization has three networks, and only one of them has productTypes = "wireless", so when the add-on iterates through my networks, it aborts when it hits a network that has no matching productType, and the add-on is unable to retrieve events from my wireless network. Please advise how to fix this. Thank you!
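One workaround (an assumption about the fix, not an official patch) is to restrict event collection to networks whose productTypes actually include the product being queried, so the iteration never reaches a network where getNetworkEvents returns 400. The filter itself is trivial; sketched here against the shape of the Dashboard API's network objects:

```python
def networks_for_product(networks, product="wireless"):
    """Keep only networks where the given productType is applicable,
    mirroring the check the add-on could make before getNetworkEvents."""
    return [n for n in networks if product in n.get("productTypes", [])]

networks = [
    {"id": "N_1", "productTypes": ["appliance"]},
    {"id": "N_2", "productTypes": ["wireless"]},
    {"id": "N_3", "productTypes": ["switch", "camera"]},
]
print([n["id"] for n in networks_for_product(networks)])  # ['N_2']
```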
I'm following the Line Chart example in the Dashboard Studio app in Splunk:

index=_internal _sourcetype IN (splunk_web_access, splunkd_access)
| timechart count by _sourcetype

"viz_ZrQCy9wp": {
    "type": "viz.line",
    "options": {
        "fieldColours": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    }
},

I cannot get it to set the field name colours in a timechart. I'm having the same issue on other searches, and the second y-axis settings don't appear to work either. Has something changed with how Splunk handles charts in Dashboard Studio? Thanks
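For reference, in recent Dashboard Studio releases the per-series colour option on viz.line is spelled seriesColorsByField (American spelling); as far as I can tell, fieldColours is not a recognized option name, so the following is worth trying, though whether it applies depends on your Splunk version:

```json
"viz_ZrQCy9wp": {
    "type": "viz.line",
    "options": {
        "seriesColorsByField": {
            "splunk_web_access": "#FF0000",
            "splunkd_access": "#0000FF"
        }
    }
}
```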
Hi, We have a custom search that should alert when a critical host, that we have defined in the search, is missing. The issue we're having is that we haven't been alerted on some of the hosts not having logs because the last time they had any logs was an age ago. When I change earliest=-1d to earliest=-1y the hosts I want appear, but the search takes much longer. Is there a way to make it so that for every host value specified, a stats line is created where I can fillnull the fields as appropriate? Here is the search:

| tstats prestats=true count where index="*", (host="host01" OR host="host02" OR host="host_01" OR host="host_02") earliest=-1d latest=now by index, host, _time span=1h
| eval period=if(_time>relative_time(now(),"-1d"),"current","last")
| chart count over host by period
| eval missing=if(last>0 AND (isnull(current) OR current=0), 1, 0)
| where missing=1
| sort - current, missing
| rename current as still_logging, last as old_logs, missing as is_missing
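The underlying need is a row for every expected host even when the host returned zero events in the window, which a pure event search cannot produce. The usual trick is appending the expected-host list and filling nulls; the logic, sketched in Python against the shape of the chart output (field names from the search, the rest assumed):

```python
EXPECTED_HOSTS = ["host01", "host02", "host_01", "host_02"]

def fill_missing(rows, expected=EXPECTED_HOSTS):
    """rows maps host -> {'current': n, 'last': n}. Add a zeroed row
    for every expected host with no events at all, mimicking
    '| append [...] | fillnull' over a static host list."""
    out = dict(rows)
    for host in expected:
        out.setdefault(host, {"current": 0, "last": 0})
    return out

rows = {"host01": {"current": 5, "last": 7}}
print(sorted(fill_missing(rows)))  # ['host01', 'host02', 'host_01', 'host_02']
```

In SPL the equivalent is typically `| inputlookup append=t critical_hosts.csv | fillnull current last` (lookup name hypothetical), which avoids widening the time range to a year just to see dead hosts.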
I have quiz values for 10 quizzes. Each quiz is a column and the values are 0-100 in each row. I am trying to calculate the average of each column and have each one as a point on a line chart, with 0-100 on the y-axis and each quiz as an x-axis column. For example:

| chart avg(quiz_01) AS "Quiz 1 Average", avg(quiz_02) AS "Quiz 2 Average", avg(quiz_03) AS "Quiz 3 Average"

But all of the points end up in the same column in the line chart. Thanks
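The chart lands everything in one column because the result is a single row with ten fields; a line chart wants one row per quiz (a "long" shape), which `| transpose` after the averages would likely produce. The reshaping step, illustrated in Python:

```python
def column_averages(rows):
    """Collapse wide rows into (label, average) pairs; one pair per
    quiz column, i.e. one x-axis category per quiz."""
    cols = list(rows[0])
    return [(c, sum(r[c] for r in rows) / len(rows)) for c in cols]

rows = [
    {"quiz_01": 80, "quiz_02": 90, "quiz_03": 70},
    {"quiz_01": 60, "quiz_02": 100, "quiz_03": 90},
]
print(column_averages(rows))
# [('quiz_01', 70.0), ('quiz_02', 95.0), ('quiz_03', 80.0)]
```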
I have a field with values like below:

(a)
(a,b)
(c)
(a,c)

I am trying to parse these values and get stats like below:

a 3
b 1
c 2
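The parsing step is: strip the parentheses, split on commas, then count each token. In SPL that would likely be split()/mvexpand/stats; the logic itself in Python, using the sample values:

```python
import re
from collections import Counter

values = ["(a)", "(a,b)", "(c)", "(a,c)"]

# findall pulls every token that is not a paren, comma, or whitespace
counts = Counter(
    token
    for v in values
    for token in re.findall(r"[^(),\s]+", v)
)
print(sorted(counts.items()))  # [('a', 3), ('b', 1), ('c', 2)]
```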
I've recently updated Splunk_TA_windows from version 4.1.8 to version 8.1.2. As I went through the documentation I noticed a new inputs.conf setting, "renderXml=0", to keep WinEventLogs in "classic" or "friendly" mode. After making that update to the TAs deployed to all UFs and to the indexer cluster, I'm now getting the same event under both formats. E.g., if I have an EventCode=4624 for a specific host, I run a search and I can see the same event (in different formats) with sources XmlWinEventLog:Security AND WinEventLog:Security. I only want the WinEventLogs in classic mode; I don't need the XML at the moment. If I set renderXml=true I ONLY get XmlWinEventLogs.

Some details:
- I ran btool for inputs on a dev UF and I can see that renderXml=false
- I ran btool for inputs on one indexer and I can see that renderXml=false
- Splunk_TA_windows version 8.1.2

My inputs.conf file:

[WinEventLog://Security]
disabled = 0
renderXml = false

Does anyone have any idea why I'm still seeing both formats?
All, I want to take the list of hosts currently being indexed by Splunk, compare that list to a static list, and then show which items in the static list are not being indexed. How would I approach this?
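The comparison is a set difference: the static list minus the hosts Splunk has actually indexed. In SPL this is typically done by appending the static list (e.g. from a lookup) to a tstats host list and keeping entries seen only in the lookup; the core logic in Python:

```python
def not_indexed(static_list, indexed_hosts):
    """Items from the static list that Splunk is not currently indexing."""
    seen = set(indexed_hosts)
    return [h for h in static_list if h not in seen]

static_list = ["web01", "web02", "db01", "db02"]  # hypothetical host names
indexed = {"web01", "web02", "db01"}              # e.g. from tstats by host
print(not_indexed(static_list, indexed))  # ['db02']
```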
Hi, I am currently working on a search that filters values based on a lookup table, and I am having a difficult time with the backslash character ("\"). The search is the following:

index=<index> source=<source> access IN ([| inputlookup lookup_accesses.csv | mvcombine delim="\",\"" Accesses | nomv Accesses | eval Accesses = "\"" + Accesses + "\"" | return $Accesses])
| fields <fields>

The problem occurs when the data inside contains the backslash character ("\"); in that case it does not work and returns zero results. If the data inside the lookup doesn't contain a backslash it works fine. The lookup fields may contain file names and directories, and we are trying to make it work for both cases. Any help will be appreciated. Regards, Javier.
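Since the subsearch output is re-parsed as search syntax, a backslash in a lookup value likely needs to be doubled before it is wrapped in quotes (and any embedded quote escaped), otherwise the generated IN() list is malformed. A sketch of the quoting rule; whether one or two rounds of escaping are needed depends on where the string is re-parsed, so treat the exact count as an assumption:

```python
def spl_quote(value):
    """Quote a lookup value for a generated SPL IN() list:
    escape backslashes first, then embedded double quotes."""
    return '"' + value.replace("\\", "\\\\").replace('"', '\\"') + '"'

paths = ["C:\\Users\\javier\\file.txt", "plain_name.log"]
print(", ".join(spl_quote(p) for p in paths))
```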
Hello, my company has a sidebar that is consistent throughout all our other internal applications, and we are trying to get that same sidebar onto our Splunk instance as well. We want to add a custom sidebar, with links to other pages (it looks a bit like the Bootstrap 5 sidebar: https://getbootstrap.com/docs/5.0/examples/sidebars/), within all the apps (home/search/etc.). Is there any way this can be done? When I looked elsewhere for tips on styling the UI, most of them were about styling the dashboards themselves. Is there also a way to include a custom JavaScript file within that UI? I am new to Splunk, so maybe I am not looking in the right areas.
Hello, Roxio Secure Burn stores a history of its burn logs in C:\ProgramData\Roxio Log Files. I have a report set up in Splunk to monitor that location on all computers that have Roxio installed:

source="c:\\ProgramData\\Roxio Log Files\\*"

Most of the systems show up fine. However, several systems have files saved in that location that do not show up in the Splunk report. Those systems are visible in other reports, such as failed logons, reboots, etc., but nothing shows up for the report above. The permissions for that location are the same as on systems that DO show up. I have adjusted the time range to include the past 6 months, the past year, and all time. Nothing shows in the Splunk results, yet I can see logs in the actual directory on the system itself. Any ideas?
I have a lookup table listing CVEs which I don't want in our report, so we have made the lookup table and are adding it to the search:

| table Severity, "EC2 Instance ID", "EC2 Instance Name", "Rules Package", Rule, CreatedAt, Links, title, description, recommendation, numericSeverity
| lookup ignore_cve.csv

But I am getting the error "Error in 'lookup' command: Must specify one or more lookup fields." Do I have to add something else after ignore_cve.csv? Kindly help.
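The error means `lookup` needs at least one field to join on. Assuming the CSV has a column holding the CVE id and the events carry it in some field, the SPL fix is roughly `| lookup ignore_cve.csv cve_id AS <event_field> OUTPUT cve_id AS ignored | where isnull(ignored)` (the column names here are hypothetical). The anti-join being built, sketched in Python:

```python
def drop_ignored(findings, ignored_cves, key="cve_id"):
    """Keep findings whose CVE (field name assumed) is absent from the
    ignore lookup; the effect of lookup + where isnull()."""
    ignore = set(ignored_cves)
    return [f for f in findings if f.get(key) not in ignore]

findings = [{"cve_id": "CVE-2021-1234"}, {"cve_id": "CVE-2021-9999"}]
print(drop_ignored(findings, ["CVE-2021-9999"]))
# [{'cve_id': 'CVE-2021-1234'}]
```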
Hi, I am having difficulty extracting key=value pairs from one of the auto-extracted fields. The problem is that this field may contain just a text value, but it could also contain multiple key=value pairs; whenever there are multiple key=value pairs in the event, I am not getting the desired results. Following are some of my _raw events:

2021-08-10T11:35:00.505 ip=10.1.10.10 id=1 event="passed" model="t1" conn="connmsg=\"controller.conn_download::message.clean\", file=\"/home/folder1/filename_8555c5s.ext\", time=\"21:22:02\", day=\"08/24/2021\""
2021-08-10T11:35:00.508 ip=10.1.10.10 id=1 event="running" model="t1" conn="connmsg=\"model.log::option.event.view.log_view_conn, connname=\"model.log::option.event.view.log_view_conn_name\", user=\"xyz\", remote_conn=10.23.55.54, auth_conn=\"Base\""
2021-08-10T11:35:00.515 ip=10.1.10.10 id=1 event="failed" model="t1" conn="Failed to connect to the file for \"file_name\""
2021-08-10T11:35:00.890 ip=10.1.10.10 id=1 event="extracting" model="t1" conn="connmsg=\"model.log::option.event.view.logout.message\", user=, job_id=65, report_name=",  path=\"{\"type\":1,\"appIds\":\"\",\"path\":\"2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00\\/ip_initiate\\/10.1.120.11\\/http_code\\/200\",\"restrict\":null}\""
2021-08-10T11:36:00.090 ip=10.1.10.10 id=1 event="extracting" model="t1" conn="connmsg=\"model.log::option.event.view.audit.message, user=\"qic\\abc_pqr\, reason_msg=\"component.auth::message:unknown_user\", path=/abc/flows/timespan/2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00/ip_initiate/10.101.10.20/data.ext"
2021-08-10T11:36:00.380 ip=10.1.10.10 id=1 event="triggered" model="t1" conn="Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'"
2021-08-10T11:36:00.880 ip=10.1.10.10 id=1 event="triggered" model="t1" conn="connmsg=\"model.log::option.event.report.finished\", user=, job_id=65, report_name=",  path=\"{\"type\":1,\"namespace\":\"flows\",\"appIds\":\"10,11,12\",\"path_bar\":\"[\\\"ip_initiate=10.1.120.11\\\"]\",\"2021-08-10T11:35:00+00:00_2021-08-10T12:35:00+00:00\\/ip_initiate\\/10.1.120.11\\/http_code\\/200\",\"restrict\":null}\""

The field I am facing issues with is the "conn" field, and I want data to be extracted into conn in somewhat the following manner:

conn
\"controller.conn_download::message.clean\"
model.log::option.event.view.log_view_conn
Failed to connect to the file for \"file_name\"
\"model.log::option.event.view.logout.message\"
\"model.log::option.event.view.audit.message\"
"Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'"

But currently it just extracts the next value coming after conn=, so the current data in my conn field, based on the above raw events, looks like:

conn
connmsg=\
connmsg=\
Failed to connect to the file for \"file_name\"
connmsg=\
connmsg=\
Rule 'Conn Web Service' was triggered by Indicator:'Conn Web Service'

The "conn" field might contain even more key=value pairs, so I also wanted to know whether there is a dynamic way to capture any new key=value pair that pops up in the conn field beyond those specified. Also, the other key=value pairs in the conn field are sometimes auto-extracted and sometimes not. I am trying to write a search-time field extraction using props and transforms, but no luck so far in getting what I want. Can someone please help? Thanks in advance.
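A two-branch approach often handles this shape: if the conn payload starts with connmsg=, capture the (escaped-)quoted message; otherwise keep the whole payload as free text. The same alternation could likely serve as the REGEX of a transforms.conf stanza with SOURCE_KEY pointed at the conn field, though that is untested here. The capture logic in Python against the sample payloads:

```python
import re

def extract_conn(conn):
    """Return the connmsg value when the payload is key=value style,
    else the payload itself (the free-text case)."""
    # optional backslash and quote after connmsg=, then capture up to
    # the next quote, backslash, or comma
    m = re.match(r'connmsg=\\?"?([^"\\,]+)', conn)
    return m.group(1) if m else conn

samples = [
    'connmsg=\\"controller.conn_download::message.clean\\", file=\\"/home/x\\"',
    'connmsg=\\"model.log::option.event.view.log_view_conn, connname=x',
    'Failed to connect to the file for \\"file_name\\"',
]
for s in samples:
    print(extract_conn(s))
```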
Has anyone ever taken a table output from a search and had it create an attachment on the ServiceNow ticket being created?
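If the goal is pushing a rendered table into the ticket, ServiceNow's Attachment API (POST /api/now/attachment/file with table_name, table_sys_id, and file_name parameters) accepts raw file bytes, so a custom alert action or script could export the search results as CSV and attach them to the created record. A hedged sketch of building that request; the instance name, auth scheme, and field values are placeholders:

```python
import urllib.parse
import urllib.request

def build_attachment_request(instance, table, sys_id, file_name, data, token):
    """Build a POST to ServiceNow's Attachment API that uploads CSV
    bytes to an existing record (token handling is assumed)."""
    params = urllib.parse.urlencode({
        "table_name": table,
        "table_sys_id": sys_id,
        "file_name": file_name,
    })
    url = f"https://{instance}.service-now.com/api/now/attachment/file?{params}"
    req = urllib.request.Request(url, data=data, method="POST")
    req.add_header("Content-Type", "text/csv")
    req.add_header("Authorization", f"Bearer {token}")
    return req
```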
Tried opening the "Okta Identity Cloud Add-on for Splunk" from the UI to check the configuration and settings, but it keeps showing that it's loading without ever actually loading. I checked the "ta_okta_identity_cloud_for_splunk_okta_identity_cloud.log" file from the CLI and here is what it returned:

>>>>> tail -f ta_okta_identity_cloud_for_splunk_okta_identity_cloud.log <<<<<

File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/adapters.py", line 447, in send
    raise SSLError(e, request=request)
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:741)

2021-09-09 11:55:38,725 INFO pid=25100 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2021-09-09 11:55:38,733 ERROR pid=25100 tid=MainThread file=splunk_rest_client.py:request:144 | Failed to issue http request=GET to url=https://127.0.0.1:8089/servicesNS/nobody/TA-Okta_Identity_Cloud_for_Splunk/TA_Okta_Identity_Cloud_for_Splunk_account?output_mode=json&--cred--=1&count=0, error=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/splunk_rest_client.py", line 140, in request
    verify=verify, proxies=proxies, cert=cert, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/solnlib/packages/requests/adapters.py", line 447, in send
    raise SSLError(e, request=request)
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:741)