Hello, I am trying to extract data from the following message: the header data is in braces, and for each header there is a set of secondary data, also in braces. The events are presented as follows:

{Name=SS, PId=236} PROD {Type=A_OUTGOING, Id=7934,plan=8975, Conflict=2529, Date=2023-04-18T18:51:00.000+02:00} PROD {Type=B_OUTGOING, Id=7934, plan=8975, Conflict=72482, Date=2023-04-18T18:51:00.000+02:00} {Name=DAG, PId=55} PROD {Type=B_INCOMING, Id=7921, plan=8975, Conflict=64870, Date=2023-04-18T18:51:00.000+02:00}

The following result is expected:

Name PId Type       Id   plan Conflict Date
SS   236 A_OUTGOING 7934 8975 2529     2023-04-18T18:51:00.000+02:00
SS   236 B_OUTGOING 7934 8975 72482    2023-04-18T18:51:00.000+02:00
DAG  55  B_INCOMING 7921 8975 64870    2023-04-18T18:51:00.000+02:00

Would you please help? Thank you.
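In Splunk this kind of extraction is usually tackled with rex and mvexpand, but the underlying "header carries forward to the following records" logic can be sketched outside Splunk. A minimal Python illustration, using the sample event from the question:

```python
import re

raw = ("{Name=SS, PId=236} "
       "PROD {Type=A_OUTGOING, Id=7934,plan=8975, Conflict=2529, Date=2023-04-18T18:51:00.000+02:00} "
       "PROD {Type=B_OUTGOING, Id=7934, plan=8975, Conflict=72482, Date=2023-04-18T18:51:00.000+02:00} "
       "{Name=DAG, PId=55} "
       "PROD {Type=B_INCOMING, Id=7921, plan=8975, Conflict=64870, Date=2023-04-18T18:51:00.000+02:00}")

rows = []
header = {}
# Each {...} group is either a header (Name/PId) or a detail record.
for block in re.findall(r"\{([^}]*)\}", raw):
    fields = dict(
        (k.strip(), v.strip())
        for k, v in (pair.split("=", 1) for pair in block.split(","))
    )
    if "Name" in fields:
        header = fields          # a new header applies to all records that follow it
    else:
        rows.append({**header, **fields})

for r in rows:
    print(r["Name"], r["PId"], r["Type"], r["Id"], r["plan"], r["Conflict"], r["Date"])
```

This produces the three expected rows, with the SS/236 header repeated for its two detail records.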
How do you convert .34999832 to 34.99%, or .399345 to 39.99%? I need to see the .99 and not have it round up.
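Truncating instead of rounding means flooring at the desired number of decimal places. A minimal Python sketch of the arithmetic (in SPL the equivalent would presumably be something like eval pct=floor(x*10000)/100, untested here; note also that 0.399345 truncates to 39.93, so the 39.99 in the question may be a typo):

```python
import math

def to_truncated_percent(x, places=2):
    """Convert a fraction such as 0.34999832 to a percentage,
    truncating at `places` decimals instead of rounding."""
    factor = 10 ** places
    return math.floor(x * 100 * factor) / factor

print(to_truncated_percent(0.34999832))  # 34.99 (rounding would give 35.0)
print(to_truncated_percent(0.399345))    # 39.93
```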
Would a file size in the GBs create any issue with indexing performance?
I am using DB Connect version 3.6.0 and I am trying to execute an Oracle procedure from DB Connect. The Oracle procedure returns a number of REF CURSORs but has no IN parameters:

create or replace PROCEDURE GET_SOME_LOG_MONITORING (po_error_log_details_01 out SYS_REFCURSOR, po_error_log_details_02 out SYS_REFCURSOR, po_error_log_details_03 out SYS_REFCURSOR, po_error_log_details_04 out SYS_REFCURSOR, po_error_log_details_05 out SYS_REFCURSOR, po_error_log_details_06 out SYS_REFCURSOR, po_error_log_details_07 out SYS_REFCURSOR, po_error_log_details_08 out SYS_REFCURSOR ) AS .......

However, I can't seem to call the procedure from DB Connect successfully:

| dbxquery connection="SOME-CONNECTION" procedure="{call SOMESCHEMA.GET_SOME_LOG_MONITORING(?,?,?,?,?,?,?,?) }"

When I run this I get the following error:

java.sql.SQLException: Missing IN or OUT parameter at index:: 2

This suggests I am missing an input parameter, but I don't have any input parameter that needs to be passed in. The procedure executes successfully from within SQL Developer, so I know that the procedure works fine; it just seems to be an issue when I try to call it from Splunk DB Connect. Also, how would I go about setting this up as an input in Splunk DB Connect so that it executes the procedure every 60 minutes, and anything that is returned is indexed so that I can create an alert on the back of it?
Hi, I regularly have the problem that I save searches containing regexes with $ characters to a dashboard, where they then do not show any results. I guess I have to escape them somehow. It seems that while saving them the $ characters are automatically duplicated, but if that is supposed to be some kind of escaping, it's not working. I could not find anything in the documentation, but since there is plenty of it, I was not sure where to look exactly. Can anyone tell me how to reliably use a | rex command in a dashboard?

| rex field=_raw "\s(?<hash>\S+)$"
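For reference, in Simple XML dashboard source a literal $ has to be written as $$ so it is not interpreted as a dashboard token, which matches the automatic duplication seen on save. A sketch of what the dashboard source would look like (the index name here is a made-up placeholder, and < in the regex must be XML-escaped inside the query element):

```xml
<search>
  <!-- $$ is the Simple XML escape for a literal $ at the end of the regex -->
  <query>index=main | rex field=_raw "\s(?&lt;hash&gt;\S+)$$"</query>
</search>
```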
Is it possible to send logs to S3 from a heavy forwarder?  I have seen information about being able to ingest from S3.  Please advise.   Any information can help.
Greetings all. I'm relatively new to Splunk and did not see an answer for this particular issue in the KB. Any help is appreciated. I have alerts turned on for missing forwarders and am being notified every 15 minutes that one is missing. After a small amount of investigation, I found that this Windows host has been permanently powered down. I would like to remove this host, not only from alerting, but from Splunk Cloud altogether. I DO need to keep its historical data, as we are in the financial tech industry and our retention policies are auditable. Does anyone know how to remove said host, but keep a record that it was there before removing it? Thank you very much. If any more information is necessary, I would be happy to provide it.
Hi, I'm looking to export a scheduled report using the REST API, but I'm struggling with the syntax. I was able to run a new search inside curl and export it, but I can't seem to do the same for saved reports. I would be grateful if someone could help with the syntax for exporting the following report as a CSV file:

curl -k -H "Authorization: Bearer myValidToken" https://myValidDomain.splunkcloud.com:8089/servicesNS/userName/app/saved/searches/%20Test%20/history
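One documented pattern is to dispatch the saved search and then pull the resulting job's output as CSV. A rough, untested sketch using the same placeholder host, user, and report name as in the question (the dispatch response contains the job's sid, and the job may need to finish before results are available):

```shell
# 1) Dispatch the saved report; the XML response contains the search job <sid>
curl -k -H "Authorization: Bearer myValidToken" -X POST \
  "https://myValidDomain.splunkcloud.com:8089/servicesNS/userName/app/saved/searches/%20Test%20/dispatch"

# 2) Fetch the job's results as CSV, substituting the sid from step 1
#    (count=0 returns all rows)
curl -k -H "Authorization: Bearer myValidToken" \
  "https://myValidDomain.splunkcloud.com:8089/servicesNS/userName/app/search/jobs/<sid>/results?output_mode=csv&count=0"
```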
Does anyone know what the Explore Data option under Settings is? Is there any documentation available? It seems interesting to have a virtual index and search from it. Will there be a licensing impact if we create and use a virtual index?
Below is a sample HTML event:

<HTML><BODY><TABLE border="1"><TH style=background-color:#00FFFF>Cluster</TH><TH style=background-color:#00FFFF>BlockSize</TH> <TR bgcolor=#ABEBC6><TD>GCS E1</TD><TD>41008</TD></TR><TR bgcolor=#ABEBC6><TD>VPay E1</TD><TD>18994</TD></TR><TR bgcolor=#ABEBC6><TD>Cadence E1</TD><TD>35345</TD></TR><TR bgcolor=#ABEBC6><TD>GCODS E1</TD><TD>3312</TD></TR><TR bgcolor=#ABEBC6><TD>EDMS E1</TD><TD>3715</TD></TR><TR bgcolor=#ABEBC6><TD>Nemo E1</TD><TD>3366332</TD></TR></TABLE></BODY></HTML>

I need a Splunk query to extract the above HTML event; the output should be:

Cluster    BlockSize
GCS E1     41008
VPay E1    18994
Cadence E1 35345
GCODS E1   3312
EDMS E1    3715
Nemo E1    3366332
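In Splunk this is typically attempted with something like rex max_match=0 against the <TD> cells; the extraction logic itself can be sketched in Python with the standard html.parser (the sample below is trimmed to three rows for brevity):

```python
from html.parser import HTMLParser

SAMPLE = ('<HTML><BODY><TABLE border="1"><TH>Cluster</TH><TH>BlockSize</TH>'
          '<TR><TD>GCS E1</TD><TD>41008</TD></TR>'
          '<TR><TD>VPay E1</TD><TD>18994</TD></TR>'
          '<TR><TD>Nemo E1</TD><TD>3366332</TD></TR></TABLE></BODY></HTML>')

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell; rows have two cells each."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
    def handle_data(self, data):
        if self.in_td:
            self.cells.append(data)

p = CellCollector()
p.feed(SAMPLE)
# Pair up cells: even positions are Cluster, odd positions are BlockSize.
rows = list(zip(p.cells[0::2], p.cells[1::2]))
for cluster, size in rows:
    print(cluster, size)
```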
Problem We are having trouble fetching MessageTrace logs using the "Splunk Add-on for Microsoft Office 365 (Version 4.2.1)" app. Every time we configure the input with a specific start_date, it ingests a block of logs from a start_date until an end_date. However, after ingesting the first block of logs, the script throws an error. The following times it tries to ingest the logs, it uses the same start and end dates. Here are the errors we are seeing: First it collects the logs:   2023-04-18 11:02:21,951 level=INFO pid=22264 tid=MainThread logger=splunk_ta_o365.modinputs.message_trace pos=__init__.py:_get_events_continuous:204 | datainput=b'MessageTraceLogs' start_time=1681812136 | message="Collecting the data between Start date: 2023-04-17 01:00:00, End date: 2023-04-17 02:00:00"   And then it throws an error:   2023-04-18 11:02:24,259 level=ERROR pid=22264 tid=MainThread logger=splunk_ta_o365.modinputs.message_trace pos=__init__.py:_get_messages:253 | datainput=b'MessageTraceLogs' start_time=1681812136 | message="HTTP Request error: 401 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2023-04-17T01%3A00%3A00Z'%20and%20EndDate%20eq%20datetime'2023-04-17T02%3A00%3A00Z'&$skiptoken=1999" stack_info=True Traceback (most recent call last): File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 246, in _get_messages response.raise_for_status() File "/opt/splunk/etc/apps/splunk_ta_o365/lib/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 401 Client Error: for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2023-04-17T01%3A00%3A00Z'%20and%20EndDate%20eq%20datetime'2023-04-17T02%3A00%3A00Z'&$skiptoken=1999 2023-04-18 11:02:24,260 level=ERROR pid=22264 tid=MainThread 
logger=splunk_ta_o365.modinputs.message_trace pos=__init__.py:run:359 | datainput=b'MessageTraceLogs' start_time=1681812136 | message="An error occurred while collecting data" stack_info=True Traceback (most recent call last): File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 354, in run self._collect_events(app) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 144, in _collect_events self._get_events_continuous(app) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 215, in _get_events_continuous self._process_messages(start_date, end_date) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 306, in _process_messages message_response = self._get_messages(nextLink) File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/message_trace/__init__.py", line 254, in _get_messages return messages UnboundLocalError: local variable 'messages' referenced before assignment 2023-04-18 11:02:24,265 level=INFO pid=22264 tid=MainThread logger=splunksdc.collector pos=collector.py:run:270 | | message="Modular input exited."   Our inputs are configured as follows:   [splunk_ta_o365_message_trace://MessageTraceLogs] delay_throttle = 1440 index = OUR_INDEX input_mode = continuously_monitor interval = 300 query_window_size = 60 start_date_time = 2023-04-17T00:00:00 tenant_name = OUR_TENANT start_by_shell = false disabled = 0   We have been without continuous monitoring for more than a week. Thanks for the Help!
Hi, I have a Zscaler NSS connected to Splunk. I made a change in the DNS entries so that my em1 (the interface that is connected to Splunk) has a different name. Ever since then, the logs have been coming out differently; they follow a different format. How should I troubleshoot this? Thanks!
Hi, I have the following Splunk query:

index=ABC sourcetype=DEF dv_assignment_group="SECURITY-NETWORK-L3"  | table _time, description, dv_parent, dv_state, dv_assigned_to | dedup dv_parent | appendcols [| inputlookup user_identities.csv | where L6MgrName="John Doe" | where NOT match(businessemail,"(?i)dellteam") | eval copy=mvrange(0,3) | mvexpand copy | eval rnd=random() | sort 0 rnd | fields - copy rnd | rex field=businessemail "(?<businessemail>[^@]+)@[^.]+\.com" | eval businessemail=replace(businessemail, "\.", " ") | search businessemail ="*" | fields businessemail] | eval "Employee to Review"=businessemail, "Time" = _time, "Description" = description, "Ticket Number" = dv_parent, "State" = dv_state, "Employee Assigned To" = dv_assigned_to | where isnotnull(Time) or isnotnull("Ticket Number") | table Time, Description, "Ticket Number", State, "Employee Assigned To", "Employee to Review"

However, the part of the query that involves the appendcols function is quite slow, i.e.:

| appendcols [| inputlookup user_identities.csv | where L6MgrName="John Doe" | where NOT match(businessemail,"(?i)dellteam") | eval copy=mvrange(0,3) | mvexpand copy | eval rnd=random() | sort 0 rnd | fields - copy rnd | rex field=businessemail "(?<businessemail>[^@]+)@[^.]+\.com" | eval businessemail=replace(businessemail, "\.", " ") | search businessemail ="*" | fields businessemail]

How can I optimise this search to speed it up? Thanks.
We have a need to migrate our Phantom data, including the containers, to another instance. Though it's not listed in REST Containers - Splunk Documentation, I was able to export the containers via /rest/container/{id}/export; however, I didn't find the REST endpoint for importing the containers. Any advice will be appreciated.
Hi all. Well, this is not a question; it's something I thought I'd share. I created a GitHub repo for UF upgrades on Linux and Windows through the DS (cloned from Luke Netto). We stumble upon the UF upgrade task every year, and we have had many discussions about UF upgrades through the Deployment Server. From the blog below, I got the initial GitHub repo, then updated/edited it for Windows:

https://www.splunk.com/en_us/blog/tips-and-tricks/upgrading-linux-forwarders-using-the-deployment-server.html

Could you have a look and share your views? Thanks.

https://github.com/inventsekar/splunk-uf-upgrade-windows-thru-DS/
https://github.com/inventsekar/splunk-uf-upgrade-linux-thru-DS

Please note that this repo may have errors and issues; it needs to be tested in a dev/test environment. Thanks.
Hi folks, is there a way to encrypt sensitive data at index time and decrypt it at search time in Splunk? If yes, how can we do this?
Hi Splunk team, I'm Arun from Celigo, and we build integrations that integrate data between source and destination applications, such as ERP, CRM, and so on. In order to implement the Splunk API in our product, we have been having an issue while testing the Splunk APIs despite providing the correct credentials: the same credentials work in our production account but fail in our sandbox account. We suspect it may be an IP whitelisting error. We would appreciate it if you could help us resolve the issue, or let us know if we need to whitelist our sandbox IP address.
I can see the pagination in the Splunk dashboard and am also able to set the total. But is there any way to also see a per-page total?
Below are two events.

Start event:

index=x source=xtype | spath application | search application="x app" "saved note" RCVD
| rex field=log "actionid=(?<actionid>.*?),"
| rex field=log "manid=(?<mandid>.*?),"
| rex field=log "bid=(?<bid>.*?)"
| rex field=log "state=(?<state>.*?)"
| table _time bid, mandid, actionid, state

End event:

index=y sourcetype=yytype source=y "VALIDATION SUCESS" "msg got"
| rex field=msg "manid\:(?<mandid>.*?),"
| rex field=msg "actionid\:(?<actionid>.*?)"
| table _time mandid actionid
There are two types of raw data. What is the regular expression to get the value between the /* special symbol and the */ special symbol in the raw data? I tried this regex but it doesn't work:

rex field=query "^[^/\n]*/\*(?P<test>[^\*]+)"

DATA1
SELECT /*4/18 test */ DRTA_SEQ\r\n FROM DATA_REQ_LIST\r\n WHERE DATE < TO_DATE('2023-04-18 06:00:00', 'YYYY-MM-DD HH24:MI:SS')\r\n

DATA2
with my_index as (\n select index_name from ALL_indexes where table_owner = :1 \n /* test select index_name from CHAN_indexes where table_owner = :schema_name and table_name in ( :[*object_names] )\n *//* test select * from Chanlist */

I want these strings to be extracted:

DATA1
4/18 test

DATA2
test select index_name from CHAN_indexes where table_owner = :schema_name and table_name in ( :[*object_names] )\n
test select * from Chanlist
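The pattern above fails on DATA2 because [^\*]+ stops at the first * inside the comment body. A non-greedy match with the dotall flag handles both cases; here is a sketch of the logic in Python (in SPL the analogous pattern might be rex field=query max_match=0 "/\*(?s)(?<test>.*?)\*/", though that exact SPL is untested here):

```python
import re

data1 = ("SELECT /*4/18 test */ DRTA_SEQ\r\n FROM DATA_REQ_LIST\r\n "
         "WHERE DATE < TO_DATE('2023-04-18 06:00:00', 'YYYY-MM-DD HH24:MI:SS')\r\n")

data2 = ("with my_index as (\n select index_name from ALL_indexes where table_owner = :1 \n "
         "/* test select index_name from CHAN_indexes where table_owner = :schema_name "
         "and table_name in ( :[*object_names] )\n */"
         "/* test select * from Chanlist */")

# Non-greedy match of everything between /* and */; DOTALL lets it
# cross newlines, and findall returns every comment in the query.
comments = re.compile(r"/\*(.*?)\*/", re.DOTALL)

print(comments.findall(data1))
print(comments.findall(data2))
```

The non-greedy .*? is what allows a lone * (as in "select *") to appear inside a comment without ending the match early.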