All Topics


Dear all, I have a use case in which my Splunk universal forwarder does not continuously monitor my logs. Because of this, I am using batch mode so that the files are deleted after ingestion. Now, I occasionally receive log files that I have already received at an earlier point in time. The problem is that features such as crcSalt and initCrcLength are only available in monitor mode, which means I cannot benefit from Splunk's features for preventing duplicate ingestion of the same data. Any help with a solution for this is greatly appreciated.
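One hedged workaround, if files must still be deleted after ingestion: switch back to monitor mode to regain CRC-based duplicate detection, and delete old files with an external job instead of the batch sinkhole. A minimal inputs.conf sketch, with the path and retention window purely illustrative:

[monitor:///var/log/drop/*.log]
index = main
# no crcSalt here on purpose: a new file whose initial bytes match an
# already-seen file is treated as a duplicate and skipped
initCrcLength = 1024

# illustrative cron entry that deletes files older than a day, once the
# forwarder has had time to read them:
# 0 3 * * * find /var/log/drop -name '*.log' -mmin +1440 -delete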
I have two Splunk Enterprise environments, both at 9.0.2. For users in one environment, search history goes back only two days. For users in the other environment, search history goes back more than ... See more...
I have two Splunk Enterprise environments, both at 9.0.2. For users in one environment, search history goes back only two days. For users in the other environment, search history goes back more than 8 months. Any clue about what could cause that? Both environments are using a single search head. Users are set up the same in each environment. The limits.conf on both search heads is identical. I verified that the user's search history .csv file goes back two days on one and 8 months on the other.
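Since limits.conf looks identical on both search heads, one way to confirm what each one actually resolves at runtime is btool; as I understand it, search_history_max_count in limits.conf caps how many history entries are kept per user, so a lower effective value on one host would explain the short history. A hedged check:

$SPLUNK_HOME/bin/splunk btool limits list search --debug | grep -i search_history

The --debug flag shows which file each resolved value comes from, so a stray app-level limits.conf would show up here.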
Hello Splunkers!! We have a dashboard that works on loadjob. When some users access the dashboard, they get a "No results found" message. First I thought it was a problem with permissions, but out of 4 colleagues with the same admin access as mine, 3 are able to see the dashboard results, so it does not seem to be a permissions problem. To locate the problem in the query, we traced the logic line by line and found the line from which the affected users get no results.

Search query:

| loadjob reportname
.....some evals & lookups....
| eval valid=if(match(backlog_dates,e_time),"yes","no")
| search valid=yes ---> no results from this line

I replaced 'match' with 'like' but still got no results. I also tried the line below, with the same issue:

| where backlog_dates like e_time

I checked the logs for both the users who get results and those who do not, but there is nothing suspicious and no errors in the logs. It is very strange that it works for some users. Please help me figure out the issue. Below is the sample data
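A hedged observation: match() treats its second argument as a regular expression, and like() treats it as a literal pattern where only % and _ are wildcards, so `backlog_dates like e_time` only matches exact equality unless e_time itself contains wildcards. If the intent is "backlog_dates contains e_time", one sketch that sidesteps regex metacharacters in e_time:

| loadjob reportname
| eval valid=if(like(backlog_dates, "%" . e_time . "%"), "yes", "no")
| search valid=yes

If e_time is rendered from each user's own timezone preference, users in different timezones would also compute different strings against the shared loadjob artifact, which would explain per-user differences; that part is an assumption worth verifying.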
Hi, I am sending Windows system and security data to Splunk Cloud. The data is collected with a UF and forwarded to the cloud through a HF. I want to get rid of the extra text in the Windows data (example: 4624). I saw the SEDCMD stanzas in the documentation and tried placing them in the sourcetype on Splunk Cloud, but it is not working, while the same configuration works on my on-prem indexer. I'm not sure what is wrong. Any suggestions would be appreciated.
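A likely explanation, hedged: SEDCMD runs at parse time, and with a heavy forwarder in the path the HF is the first full Splunk instance to parse the data, so a SEDCMD defined only in Splunk Cloud never sees these events. A props.conf sketch for the HF (the sourcetype name and regex are illustrative; match them to your data):

[XmlWinEventLog]
SEDCMD-strip_extra = s/This event is generated[\s\S]*$//g

After deploying this to the HF, a restart is needed for the change to apply to newly ingested events.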
Hi, I have a string in my Splunk logs like the one below:

msg.message="Matches Logs :: Logger{clientId='hFKfFkF-K7jlp5epzCnZASazoYmXxgUzBLQ8cixb7f23afb8', apiName='Matches', apiStatus='Success', error='NA', locationIdMerchDetail=[6d65fcb6-8885-4f56-93c1-7050c8bef906 :: QUALITY COLLISION 1 LLC :: 1, e5ff5b47-839c-4ed0-86a3-87fc18f4bfda :: P A JOLLY'S LLC :: 2, 2053428f-f6ba-4038-a03e-4dbc8737c37d :: CREATIVE EXCELLENCE SALON LLC :: 3, c3e9e6fc-8388-49fd-ba7b-3b9d76f5f9ea :: QUALITY SERVICES AND APP :: 4, 75ca5712-f7a1-4a63-a69f-d73c8e7d187b :: FREEDOM COMICS LLC :: 5, e87a96e8-de73-47f8-bfbd-6099c83376f7 :: S AND G STORES LLC :: 6, 732f9d61-3916-4664-9601-dd0745b68837 :: QUALITY RESALE :: 7, d666bef7-e2fa-498f-a74f-e80f6d2701e7 :: CAKE ART SUPPLIES LLC :: 8, 23ca4856-5908-4bd6-b90d-cace07036b05 :: INTUIT PAYMENT SOLUTIONS, LLC :: 9, b583405f-bb3d-4dba-9bb3-ee9b3713b8f7 :: LA FIESTA TOLEDO LLC :: 10], numReturnedMatches='10'}"

The string contains locationIdMerchDetail, as shown above. I need to extract locationId and rank into a table; in every comma-separated item, the first element is the locationId and the last element is the rank. For example, in "6d65fcb6-8885-4f56-93c1-7050c8bef906 :: QUALITY COLLISION 1 LLC :: 1", locationId is 6d65fcb6-8885-4f56-93c1-7050c8bef906 and rank is 1.

I am able to extract the locationIds into a table using the query below, but I am not sure how to include the corresponding rank:

index=app_pcf AND cf_app_name="credit-analytics-api" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| rex field=msg.message "(?<LocationId>[0-9a-f]{8}-([0-9a-f]{4}\-){3}[0-9a-f]{12})"
| table LocationId

I want a table like the one below:

LocationId                              rank
6d65fcb6-8885-4f56-93c1-7050c8bef906    1
e5ff5b47-839c-4ed0-86a3-87fc18f4bfda    2
2053428f-f6ba-4038-a03e-4dbc8737c37d    3
...and so on

Is there a regex to extract these into a table? Please help.
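A hedged sketch: capture each "uuid :: name :: rank" entry with max_match=0, expand the multivalue field, then split each entry; the lazy quantifier plus lookahead lets names that themselves contain commas (e.g. "INTUIT PAYMENT SOLUTIONS, LLC") stay inside one entry:

index=app_pcf AND cf_app_name="credit-analytics-api" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| rex field=msg.message max_match=0 "(?<pair>[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}.*?::\s*\d+(?=,|\]))"
| mvexpand pair
| rex field=pair "^(?<LocationId>\S+).*::\s*(?<rank>\d+)$"
| table LocationId rank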
We are using Splunk React. May I have sample Splunk React code that queries Splunk data, please?
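A hedged sketch using the @splunk/search-job package from the Splunk UI toolkit (the package name and API are as I understand current toolkit versions; verify against your installed version):

import SearchJob from '@splunk/search-job';

// Create a search job; when this runs inside Splunk Web, the toolkit
// reuses the logged-in session for authentication.
const searchJob = SearchJob.create({
    search: 'search index=_internal | head 10',
    earliest_time: '-15m',
    latest_time: 'now',
});

// getResults() returns an observable that emits as results arrive.
const subscription = searchJob.getResults().subscribe(response => {
    console.log(response.results); // array of result rows
});

// In a React component, call subscription.unsubscribe() on unmount.

This is typically wired into a useEffect hook so the job is created on mount and the subscription is cleaned up on unmount.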
index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request" | eval LabelType=coalesce(labelType, do... See more...
index="*dockerlogs*" source="*gps-request-processor-test*" OR source="*gps-external-processor-test*" OR source="*gps-artifact-processor-test*" event="*Request" | eval LabelType=coalesce(labelType, documentType) | eval event = case (like(event,"%Sync%"),"Sync",like(event,"%Async%"),"Async") | stats count(eval(status="Received")) as received count(eval(status="Failed")) as failed by sourceNodeCode geoCode LabelType event where as the source : - is my application name event :- Type of request whether synchronous request or Asynchronous request labeltype : - Different type of label sourcenodecode and geocode :- is the shopcode and shopregion from where the label is requested received - no of label request received failed - no of label request failed Now i want to find the received and failed request count based on sourceNodeCode, geoCode, LabelType, event But for failed request count i want to add condition - in case of synchronous request or event the failed count should fetch from '*gps-request-processor-test*' application in case of asynchronous request or event the failed count should fetch from "*gps-external-processor-test*" OR "*gps-artifact-processor-test*" application The output should look something similar to this attached o/p.
I want to match one field's value against another field's values: if the value in the btc field is present in NEB_Sales_Oppy_Business_Type, I should get TRUE, otherwise FALSE. I tried the following query:

| eval Is_businees_type_matching=if(match(NEB_Sales_Oppy_Business_Type, btc), "TRUE", "FALSE")

Why am I getting FALSE for 3 rows even though the value is present in both fields?
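Two hedged suspects: match() treats btc as a regular expression, so metacharacters such as parentheses or plus signs in the business type break the match; and invisible leading/trailing whitespace defeats both match() and equality. A sketch that treats btc as a literal substring and trims it first:

| eval Is_businees_type_matching=if(like(NEB_Sales_Oppy_Business_Type, "%" . trim(btc) . "%"), "TRUE", "FALSE")

Note that % and _ are still wildcards inside like(), so if btc can contain those characters this is an approximation too.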
Hi Splunkers, we are getting the value below inside the field "data", in tabular format:

   Source          success  Total_Count
0  abc.csv         True     200
1  some_string_1   False    34
2  some_string_2   True     12
3  some_string_3   False    4
4  some_string_4   True     63
5  some_string_5   False    2
6  some_string_6   True     108

Can we extract these values into separate fields? Thank you in advance for your reply.
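A hedged sketch, assuming each row in the data field really does follow the "<row number> <source> <True|False> <count>" shape shown above:

| rex field=data max_match=0 "(?<row>\d+\s+\S+\s+(?:True|False)\s+\d+)"
| mvexpand row
| rex field=row "^\d+\s+(?<Source>\S+)\s+(?<success>True|False)\s+(?<Total_Count>\d+)$"
| table Source success Total_Count

If source names can contain spaces, the \S+ would need loosening, e.g. a lazy .+? anchored against the True/False column.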
Hi Team, I have a question: every month the EUM licenses refresh, and I can see the valid-from and valid-to dates changing accordingly. Will the utilized data carry over the same way? For example:

Jan 2022: utilized pageview count is 400
Feb 2022: utilized pageview count is 700
Mar 2022: utilized pageview count is 1100

In this scenario the total utilized value is 1100 pageviews, so each month the previously utilized value remains and new views are added on top. As per my knowledge and understanding, that means January used 400 views, February 300 views, and March 400 views; the utilized count does not reset and start from scratch. Please correct me if I am wrong. Thanks, Jaganathan
Hi Team, I have installed the trial version of Splunk Enterprise. It worked fine for 2 days, but after that I am no longer able to access the Splunk URL; it gives the error below. Please help.

This site can’t be reached
127.0.0.1 refused to connect.
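That browser error usually just means nothing is listening on port 8000, i.e. splunkd is not running. A hedged first check from the command line (paths assume a default install; on Windows use %SPLUNK_HOME%\bin\splunk.exe):

$SPLUNK_HOME/bin/splunk status
$SPLUNK_HOME/bin/splunk start
# if startup fails, the tail of splunkd.log normally states the reason
tail -50 $SPLUNK_HOME/var/log/splunk/splunkd.log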
I want to group streamstats results by either one or two fields. Grouping by sourcetype would be sufficient; grouping by index and sourcetype would be ideal. This query works fine for a single sourcetype, but does not work for multiple sourcetypes. The desired outcome is one record per unique sourcetype and/or index. Example query:

| tstats count as event_count where index="aws_p" sourcetype="aws:cloudwatch:guardduty" by _time span=1m index sourcetype
| sort _time
| streamstats window=1 current=false sum(event_count) as event_count values(_time) as prev_time by index sourcetype
| eval duration=_time-prev_time
| eval minutes_between_events=duration/60
| stats min(minutes_between_events) as min_minutes_between_events avg(minutes_between_events) as avg_minutes_between_events max(minutes_between_events) as max_minutes_between_events by index sourcetype
| eval avg_minutes_between_events=round(avg_minutes_between_events,0)
| eval max_hours_between_events=round(max_minutes_between_events/60,2)

[screenshots not included: results for multiple sourcetypes; results for a single sourcetype]
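Two hedged observations about that pipeline: streamstats applies its window across all events unless global=false is set, so each by-group does not automatically get its own one-event window; and a bare sort truncates to 10,000 results unless written as sort 0. A sketch:

| tstats count as event_count where index="aws_p" by _time span=1m index sourcetype
| sort 0 index sourcetype _time
| streamstats window=1 current=false global=false last(_time) as prev_time by index sourcetype
| eval minutes_between_events=(_time-prev_time)/60
| stats min(minutes_between_events) as min_minutes_between_events avg(minutes_between_events) as avg_minutes_between_events max(minutes_between_events) as max_minutes_between_events by index sourcetype

Dropping the sourcetype filter in the tstats clause (an assumption about intent) is what lets multiple sourcetypes through in the first place.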
Hi, I have a distributed on-prem Splunk Enterprise deployment at 8.1.x, running under systemd. I recently noticed that the previous admin did not tune the ulimits. I was wondering if anyone knows how to tune these settings based on the individual host's role/hardware. For instance, if I override with the "systemctl edit Splunkd.service" command:

[Service]
LimitNOFILE=65535    <- tech support suggestion
LimitNPROC=20480     <- tech support suggestion
LimitDATA=(80% of total RAM?)  <- tech support suggestion
LimitFSIZE=infinity  <- tech support suggestion
TasksMax=20480       <- mirrors LimitNPROC, per tech support

I have read the docs and seen the default suggestions for NOFILE and NPROC, but how do you determine the other limits, specifically NPROC and DATA?

ref: https://docs.splunk.com/Documentation/Splunk/9.0.2/Installation/Systemrequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems

Thank you!
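For what it's worth, a hedged override sketch with LimitDATA computed as roughly 80% of physical RAM in bytes (the 64 GB host here is illustrative):

# systemctl edit Splunkd.service -> drop-in override, values illustrative
[Service]
LimitNOFILE=65535
LimitNPROC=20480
LimitFSIZE=infinity
TasksMax=20480
# 80% of a 64 GiB host: 68719476736 * 0.8 = ~54975581389 bytes
LimitDATA=54975581389

# apply with:
#   systemctl daemon-reload && systemctl restart Splunkd.service

NPROC mostly needs headroom for search processes, so sizing it to a comfortable multiple of your peak concurrent searches is one common heuristic rather than an official formula.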
Hello, I have the following tabular formatted data: [table screenshot not included]. How can I achieve the following: [expected-output screenshot not included]. Thanks in advance for your help. @ITWhisperer
Hello everyone! I have this basic search:

index=main | stats list(src.port), list(dst.port) count(src.ip) as COUNT by id

How can I apply mvdedup (or stats dc, I don't know which is better) to the dst.port field, but only if the number of dst ports equals the COUNT field?
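A hedged sketch: rename the list() outputs so eval can reference them without quoting the dotted names, then dedup conditionally:

index=main
| stats list(src.port) as src_ports, list(dst.port) as dst_ports, count(src.ip) as COUNT by id
| eval dst_ports=if(mvcount(dst_ports)=COUNT, mvdedup(dst_ports), dst_ports)

Without the renames, the eval would need 'list(dst.port)' in single quotes to address the field.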
In the automated e-mails sent for Splunk OnDemand entitlements, the link hyperlinked as "support portal" takes you to an outdated site: https://legacylogin.splunk.com/
Hello, how are you? Can you help me? I'm trying to configure the deployer to send the apps to the SHs, but I'm getting an error when executing the command:

Command: ./splunk apply shcluster-bundle -action stage --answer-yes

Error: Error in pre-deploy check, uri=?/services/shcluster/captain/kvstore-upgrade/status, status=502, error=Cannot resolve hostname

Thanks!
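That 502 reads as the deployer failing to resolve the captain's hostname during the pre-deploy check. A hedged triage sequence (hostnames and credentials are placeholders):

# on any SH member: confirm a captain is elected and note its mgmt_uri
./splunk show shcluster-status -auth admin:<password>

# on the deployer: confirm that hostname actually resolves
nslookup <captain_hostname>

# then retry, pointing -target at a reachable member management URI
./splunk apply shcluster-bundle -action stage -target https://<sh_member>:8089 --answer-yes

If the captain's mgmt_uri uses a shortname the deployer cannot resolve, DNS or an /etc/hosts entry on the deployer is the usual fix.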
Hello, I'm getting this error while trying to register a Linux server client:

ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the openssl verify command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to true), the CA certificate and the server certificate should not have the same Common Name.

Does anyone know how to solve it?
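Following the error text's own suggestion, a hedged pair of checks (paths are illustrative):

# does the server certificate chain to the CA the client trusts?
openssl verify -CAfile /opt/splunk/etc/auth/mycerts/myCACert.pem /opt/splunk/etc/auth/mycerts/myServerCert.pem

# compare the Common Names; per the error, the CA CN and server CN must differ
openssl x509 -in /opt/splunk/etc/auth/mycerts/myCACert.pem -noout -subject
openssl x509 -in /opt/splunk/etc/auth/mycerts/myServerCert.pem -noout -subject

"unknown ca" generally means the client's configured CA bundle does not contain the CA that signed the server certificate, so pointing sslRootCAPath (or the input's CA setting) at the right bundle is the other usual fix.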
Hi, I need help creating a daily CSV export to a file from a dataset, covering 24 hours. I have a dataset under Search & Reporting >> Datasets >> my dump report. When I click on my dump report, it gives me a report for whatever condition/time period I like, and I am able to download it as a CSV file. I need help creating a daily automatic job so that every 24 hours (each day from 00:00:00 to 23:59:00) the report is created and saved/exported to a specific folder/disk, organized into date_month_year folders/files. Kindly guide me. I am trying this on Splunk installed on Windows.
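One hedged way to automate this: save the dataset search as a scheduled report that runs shortly after midnight over the previous day and ends in outputcsv, then let Windows Task Scheduler move the file into a dated location. Names and paths below are illustrative:

<your dataset search> earliest=-1d@d latest=@d
| outputcsv my_dump_report

outputcsv writes to %SPLUNK_HOME%\var\run\splunk\csv\my_dump_report.csv and overwrites it on each run, hence the external move/rename step, e.g. a scheduled PowerShell one-liner:

Move-Item "$env:SPLUNK_HOME\var\run\splunk\csv\my_dump_report.csv" ("D:\exports\my_dump_report_{0:dd_MMM_yyyy}.csv" -f (Get-Date))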
Hello. I'm trying to identify a pool of Windows hosts by adding an additional field to the events they forward. I can do this by adding an inputs.conf in $SPLUNK_HOME/etc/system/local, and it works. My metadata field is called uf_deployment::remote_laptop (see below).

[monitor://C:\Windows\System32\winevt\Logs\Application.evtx]
index = my_index
disabled = 0
sourcetype = XmlWinEventLog
_meta = uf_deployment::remote_laptop

However, the only way I can see to do this is by monitoring a log file with a [monitor] stanza, and that presents a problem: I don't want to forward events from a log I'm not interested in, nor do I want to duplicate events. I'm looking for a solution that will let me send the _meta = uf_deployment::remote_laptop field without having to "monitor" a log file. So far I have tried [default] with no success (see below). Any help is appreciated. Thank you.

[default]
index = my_index
disabled = 0
sourcetype = XmlWinEventLog
_meta = uf_deployment::remote_laptop
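A hedged sketch of the approach as I read the docs: a [default] stanza that carries only the _meta line applies the field to every input on the forwarder, and the extra index/disabled/sourcetype keys in the [default] attempt above may be what interferes. The field is written as an indexed field, so the search head also needs fields.conf for it to behave like a normal field at search time:

# inputs.conf on the universal forwarder
[default]
_meta = uf_deployment::remote_laptop

# fields.conf on the search head(s)
[uf_deployment]
INDEXED = true

Without the fields.conf entry, the data is still there but is only reliably found with the indexed-field syntax uf_deployment::remote_laptop.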