All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am trying to filter events based on a lookup table with a time range. My lookup table looks like this:

startDay  startTime  endDay   endTime
Saturday  20:00      Tuesday  08:00

With this lookup, it should remove all events from Saturday 8 PM until Tuesday 8 AM. How do I create this query?
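A minimal sketch of the filtering logic, assuming the single window above is hardcoded rather than read from the lookup (index=your_index is a placeholder; day numbers follow strftime's %w, where Sunday=0 and Saturday=6):

index=your_index
| eval mins = tonumber(strftime(_time,"%w"))*1440 + tonumber(strftime(_time,"%H"))*60 + tonumber(strftime(_time,"%M"))
| where NOT (mins >= 6*1440 + 20*60 OR mins < 2*1440 + 8*60)

The window wraps past the end of the week (Saturday 20:00 through Tuesday 08:00), so the exclusion is an OR of the two sides of the wrap. Driving it from the lookup would mean converting startDay/startTime and endDay/endTime to the same minutes-of-week scale before comparing.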
We have a large number of forwarders and would like to optimize the metrics data sent from them to the _internal index. The main goal is to keep the index at a reasonable size and still have enough data to search. Is there a way to aggregate the data or increase the sampling interval? There is a setting in limits.conf:

[metrics]
interval = 30
maxseries = 10

Increasing the polling interval between samples from 30 seconds to, let's say, 90 would decrease sampling and save some storage, right? Thanks for any hint.
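A sketch of the proposed change, assuming the goal is fewer metrics.log samples per forwarder (interval controls how often metrics.log snapshots are written; maxseries caps the top-N series reported per metrics group):

# limits.conf on each forwarder - a sketch, not a tuned recommendation
[metrics]
interval = 90
maxseries = 10

Tripling the interval should roughly cut the metrics.log volume reaching _internal to a third, at the cost of coarser-grained introspection data.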
I am trying to parse MS Exchange HttpProxy logs with the below setup in props and transforms, but this doesn't seem to be working.

inputs.conf on the UF:

[monitor://D:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\*\*.LOG]
disabled = 0
recursive = true
index = exchange_index
sourcetype = exchange_httpproxy
ignoreOlderThan = 0d

props.conf and transforms.conf on the SH:

[exchange_httpproxy]
REPORT-extractfields = extractfields

[extractfields]
DELIMS = ","
FIELDS = DateTime,RequestId,MajorVersion,MinorVersion,BuildVersion,RevisionVersion,ClientRequestId,Protocol,UrlHost,UrlStem,ProtocolAction,AuthenticationType,IsAuthenticated,AuthenticatedUser,Organization,AnchorMailbox,UserAgent,ClientIpAddress,ServerHostName,HttpStatus,BackEndStatus,ErrorCode,Method,ProxyAction,TargetServer,TargetServerVersion,RoutingType,RoutingHint,BackEndCookie,ServerLocatorHost,ServerLocatorLatency,RequestBytes,ResponseBytes,TargetOutstandingRequests,AuthModulePerfContext,HttpPipelineLatency,CalculateTargetBackEndLatency,GlsLatencyBreakup,TotalGlsLatency,AccountForestLatencyBreakup,TotalAccountForestLatency,ResourceForestLatencyBreakup,TotalResourceForestLatency,ADLatency,SharedCacheLatencyBreakup,TotalSharedCacheLatency,ActivityContextLifeTime,ModuleToHandlerSwitchingLatency,ClientReqStreamLatency,BackendReqInitLatency,BackendReqStreamLatency,BackendProcessingLatency,BackendRespInitLatency,BackendRespStreamLatency,ClientRespStreamLatency,KerberosAuthHeaderLatency,HandlerCompletionLatency,RequestHandlerLatency,HandlerToModuleSwitchingLatency,ProxyTime,CoreLatency,RoutingLatency,HttpProxyOverhead,TotalRequestTime,RouteRefresherLatency,UrlQuery,BackEndGenericInfo,GenericInfo,GenericErrors,EdgeTraceId,DatabaseGuid,UserADObjectGuid,PartitionEndpointLookupLatency,RoutingStatus
Hi, I am trying to do a lookup with a calculated field. Details: I have a csv containing three columns: DomainName,ThreatName,Date. My base search has a field "DomainName" which contains domains with "www." prepended in some of the results. So I formulated my search like:

base search
| eval calcDomainName = replace(DomainName, "www\.", "")
| lookup iocs_domains DomainName as calcDomainName OUTPUT ThreatName, Date
| table calcDomainName ThreatName Date

In my lookup definition, I have put "no_match" as my default. However, when searching with the above, I don't get any fields like "ThreatName" or "Date" in my output. My lookup is uploaded in the Search app and permissions are read for everyone, and I am searching under the Search app only. I can view the contents of my csv with the below command under the Search & Reporting app:

| inputlookup iocs_domains

I even verified the order of processing, in which calculated fields precede lookups. Unable to understand what I am doing wrong.
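One thing worth checking, sketched below: CSV lookup matching is case-sensitive by default, and the replace() only needs to strip a leading "www.". Lowercasing both sides (assuming the csv stores lowercase domains) often resolves silent non-matches:

base search
| eval calcDomainName = lower(replace(DomainName, "^www\.", ""))
| lookup iocs_domains DomainName AS calcDomainName OUTPUT ThreatName, Date
| table calcDomainName ThreatName Date

If the default "no_match" value still never appears, comparing a handful of calcDomainName values against | inputlookup iocs_domains side by side usually exposes the mismatch (trailing dots, whitespace, or case).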
So this search... index="myindex" source="/data/logs/log.json" "Calculation Complete" ... returns a MessageBody field which contains various different strings. I need to do the most simple regex in the world (*my string) and then count the messages which match that string, eventually charting them. I thought this would work, but it just returns 0 for them all:

index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(MessageBody="*my string")) as My_String
        count(eval(MessageBody="*your string")) as Your_String
        count(eval(MessageBody="*other string")) as Other_String

Help
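A sketch of a working version: inside eval, = does a literal string comparison with no wildcarding, which is why every count comes back 0. like() with % wildcards (or match() with a regex) does what the * was meant to do:

index="myindex" source="/data/logs/log.json" "Calculation Complete"
| stats count(eval(like(MessageBody, "%my string"))) as My_String,
        count(eval(like(MessageBody, "%your string"))) as Your_String,
        count(eval(like(MessageBody, "%other string"))) as Other_String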
Hello, I'm trying to make a report to count the number of interfaces available and used. I found a query that matches my need:

index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces" src_interface!="Ethernet*.*" src_interface!="Vlan*" src_interface!="mgmt*" src_interface!="port*" src_interface!="Null*" src_interface!="loopback*"
| rex field=host "ZSE-(?<loc>\w+)-(?<room>\w+).*"
| replace "1H" WITH "UC1" | replace "1E" WITH "UC1" | replace "2H" WITH "UC2" | replace "2F" WITH "UC2" | replace "6E" WITH "C6" | replace "6F" WITH "C6" | replace "6T" WITH "C6" | replace "4B" WITH "C4" | replace "4T" WITH "C4" | replace "4E" WITH "C4"
| eval site=loc+"-"+room
| stats count(src_interface) as tot_int by site
| appendcols [search index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces" src_interface!="Ethernet*.*" src_interface!="Vlan*" src_interface!="mgmt*" src_interface!="port*" src_interface!="Null*" src_interface!="loopback*" state_interface="up"
    | rex field=host "ZSE-(?<loc>\w+)-(?<room>\w+).*"
    | replace "1H" WITH "UC1" | replace "1E" WITH "UC1" | replace "2H" WITH "UC2" | replace "2F" WITH "UC2" | replace "6E" WITH "C6" | replace "6F" WITH "C6" | replace "6T" WITH "C6" | replace "4B" WITH "C4" | replace "4T" WITH "C4" | replace "4E" WITH "C4"
    | eval site=loc+"-"+room
    | stats count(state_interface) as tot_up by site]
| eval tot_free=tot_int-tot_up

My concern is that the frequency of data reception by Splunk is not stable (plus or minus 10 minutes). How can I make my search consider only the last events received? Thanks.
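A minimal sketch of one approach, assuming each (host, src_interface) pair keeps emitting events: dedup keeps only the most recent event per combination, so the counts reflect the latest reported state regardless of how irregular the polling is:

index=centreon check_command="Cisco-SNMP-Interfaces-Global-Status" service_description="Status_All-Interfaces"
| dedup host src_interface
| eval up=if(state_interface="up", 1, 0)
| stats count as tot_int, sum(up) as tot_up by host
| eval tot_free=tot_int-tot_up

With dedup in place the appendcols subsearch becomes unnecessary, since both totals can come from a single pass over the deduplicated events (the filters, rex, and replace chain from the original query would still be applied before the stats).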
Hi guys, we have SAP Solution Manager (SolMan) 7.2 and are looking to integrate it with AppDynamics for dashboarding purposes. Can anyone please assist with this? Regards, Ash
Hi team, we are upgrading our Splunk version from 7.3.6 to 8.1.x. As per the procedure, we upgraded the cluster master first, but when we try to log in there we cannot; it is not accepting the credentials.
1. Shall we upgrade the entire infrastructure first and then check?
2. Do we need to fix this login issue first and then go to the next step?
Kindly suggest. Thanks & Regards, Abhijeet B.
Hi, recently I deployed Splunk Connect for Syslog in Docker, and my first candidate to use it was our Citrix ADC VPX. Following the instructions here https://splunk.github.io/splunk-connect-for-syslog/main/sources/Citrix/ I see the logs correctly flowing into Splunk. Now it is time to get some useful alerts out of it. I thought about something very basic to start with:
- Detect when a failover between the two Citrix nodes happens.
- Detect when a virtual server is UP but a node of the load-balancing group goes down.
- Detect when a virtual server is completely down, i.e. all nodes are down.
I am diving into the events trying to get some meaning out of them without much luck. So far I have identified a few fields but nothing that makes much sense. Does anyone have additional information regarding these logs that I could reuse, or maybe some queries I could base mine on? Thanks a lot.
Hello, I'd like to use some other shapefile or KML/KMZ geo files. Another team in my organisation is tasked with maintaining them and publishing them on WFS or WMS endpoints. I couldn't find any documentation to help me connect to geo files outside of the Splunk environment. Updating the lookup file by hand is OK for testing but not in a production environment. Could you give me some pointers? Is a WFS connection possible? Is there a doc I missed? Thanks, Eglantine
Could someone please explain the scenarios where having a data model would be important rather than using reports? Until now I have been using scheduled reports to prepare data for dashboard visuals, but I came across data models and am not able to understand the point, since a reporting mechanism is already available.
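One concrete difference worth illustrating: an accelerated data model can be queried with tstats from any number of searches, with filters and group-bys chosen at query time, whereas a scheduled report pre-computes one fixed result. A sketch, assuming the CIM Network_Traffic data model is accelerated in your environment:

| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.action="blocked" by All_Traffic.src
| sort - count

Every dashboard panel can slice the same accelerated summary differently; with scheduled reports, each new slice would need its own scheduled search.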
Hi all, I have two dashboards. The first dashboard has a table with 6 columns. Currently I am passing a value from a column "name" as a drilldown token to another dashboard. There is a column "URL" for each "name", and I want to pass the URL of the selected name as a token to be used in the second dashboard, but I don't know how. I tried setting it like this, but it's not working:

<drilldown>
  <link target="_blank">/app/abcd/dashboard1?name=$row.PName$</link>
  <set token="tok_url">$row.URL$</set>
</drilldown>

Can anyone help me with this?
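A sketch of one way to do it: <set> only sets tokens on the current dashboard, so the URL value has to travel in the link's query string instead. The |u filter URL-encodes the value, and the form. prefix lets the target dashboard pick it up as an input token (this assumes the second dashboard defines an input, possibly hidden, with token url):

<drilldown>
  <link target="_blank">/app/abcd/dashboard1?name=$row.PName$&amp;form.url=$row.URL|u$</link>
</drilldown>

On the second dashboard, the value is then available as $url$ in searches and panel titles.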
Hi, I want to extract a bytes field (using the bytes values) from this:

Sep 23 14:11:52 XXX.XXX.X.XX date=2021-09-23 time=14:11:52.004 device_id=FE-3KET123 log_id=6716781232 type=event subtype=smtp pri=information user=mail ui=mail action=NONE status=N/A session_id="47K0CjSc111111-47K0CjSc111111" msg="to=<XXXXXXXX@hotmail.com>, delay=00:00:04, xdelay=00:00:04, mailer=esmtp, pri=61772, relay=hotmail-com.olc.protection.outlook.com. [XXX.XX.XX.XXX], dsn=2.0.0, stat=Sent (<d97263bhagstbhbhet7c01f54636vfd37@GGP0HSDVVHHA9.XXX.XXX.XXX> [InternalId=32836723661134, Hostname=XXXXXXXXXX.namXXXX.prod.outlook.com] 71422 bytes in 0.303, 229.746 KB/sec Queued mail for delivery -> 250 2.1.5)"

I've already found the regex (?<bxmt>\d+) bytes but it didn't seem to work. Can anyone help?
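A sketch that matches the sample above, assuming a search-time rex (the \s+ tolerates variable whitespace, and anchoring on "bytes in" keeps the match away from other numbers; xfer_secs is just an illustrative extra capture):

... | rex field=_raw "(?<bxmt>\d+)\s+bytes\s+in\s+(?<xfer_secs>[\d.]+)"

If the extraction should live in configuration instead, the same pattern can go in props.conf on the search head as EXTRACT-bxmt = (?<bxmt>\d+)\s+bytes\s+in under the sourcetype's stanza.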
Hi Community team, I have an issue whenever I enable this add-on on my Search Head, with the below error:

Problem replicating config (bundle) to search peer 'X.X.X.X:8089', Upload bundle="E:\Splunk\var\run\SPL-SH2-1630562214.bundle" to peer name=SPL-Ind3 uri=https://X.X.X.X:8089 failed; http_status=400 http_description="Failed to untar the bundle="E:\Splunk\var\run\searchpeers\SPL-SH2-1630562214.bundle". This could be due to the Search Head attempting to upload the same bundle again after a timeout. Check for the sendRcvTimeout message in splunkd.log, consider increasing it."

Health Check: One or more apps ("TA-microsoft-graph-security-add-on-for-splunk") that had previously been imported are not exporting configurations globally to system. Configuration objects not exported to system will be unavailable in Enterprise Security.

Note: we have already increased sendRcvTimeout in distsearch.conf on both SHs to 900 as per our requirements. We are using Splunk Enterprise 8.0.5 on premises with 2 SHs (1 with ES), 3 IDX, 1 Deployment/MC, 1 LM, 1 HF. Has anyone experienced this issue or successfully installed and used this add-on in your environment? Appreciate the feedback, thanks.
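For reference, two sketches that map to the two messages above: the timeout already raised on the search heads, and (for the health-check warning) exporting the add-on's configuration globally via its metadata; the local.meta approach is the generic Splunk mechanism, not something specific to this add-on's docs:

# distsearch.conf on each SH
[replicationSettings]
sendRcvTimeout = 900

# TA-microsoft-graph-security-add-on-for-splunk/metadata/local.meta
[]
export = system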
Hi, I have the below log entry; can you help with the regex to extract the line in red? The regex I have is not working properly in props.conf.

2021-09-23 19:03:40.802 INFO 1 --- [sdgfsdgsdfgsdfg] asdfasdfasdfasfasfgfdhdfhdf : Response --> {
  "claimId" : asfdasdfadf,
  "claimFilerId" : "sadfasdf",
  "vendorName" : "asfasfadfadf. ",
  "vendorId" : "aefadf",
  "vendorAddressId" : "asfafsd",
  "vendorAddress" : "sdfgsdgsfg",
  "preparedDate" : "09-22-2021",
  "receivedDate" : "09-22-2021",
  "enteredDate" : "09-22-2021",
  "assignedTo" : {
    "employeeId" : "sdfasdf ",
    "firstName" : "asfasf",
    "lastName" : "zsdfdf",
    "adUserIdentifier" : "zsdfvzdv"
  },
  "correspondence" : {
    "type" : { "code" : 5947, "shortName" : "EOB", "longName" : "EOB" },
    "dispatchCode" : { "code" : 5947, "shortName" : "NtRqd", "longName" : "Not Required" },
    "emailAddress" : "abcd@g.com,       dgfh@a.in"
  }
I want to count up my AWS resources by region and show it like a heatmap. How do I build a table where my account IDs go down the left, the AWS regions go across the top, and each cell is the count of events for that specific combination? There are some heatmap apps, but they require time to be on the x-axis, which is not what I want here.
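A minimal sketch, assuming fields named account_id and region (swap in the actual field names your AWS data uses): chart pivots one field across the columns, giving exactly the accounts-by-regions grid, and the table visualization's color formatting can then turn it into a heatmap in the UI:

index=aws_index
| chart count over account_id by region

The equivalent via stats is | stats count by account_id region | xyseries account_id region count, which produces the same shape.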
Hi all, I'm trying hard to add data into Splunk from a .csv file instead of .json. I managed to convert it from .json to .csv, but now _time is being set to the time the data was added, not the value of the time field inside the .csv, which is an epoch Unix timestamp. I tried to alter the timestamp format using strptime() and have read this resource, https://docs.splunk.com/Documentation/SplunkCloud/8.2.2107/Data/Configuretimestamprecognition but to no avail. Please advise.
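A sketch of the props.conf side, assuming the csv's epoch column is literally named time and the file is ingested with structured-data extraction (%s is the strptime format for epoch seconds):

[your_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = time
TIME_FORMAT = %s

This has to be in place on the instance that parses the file (the forwarder, for INDEXED_EXTRACTIONS) before ingestion; events already indexed keep their original _time.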
I got the following error when setting up a data input in DB Connect:

java.lang.NullPointerException
    at java.net.URLDecoder.decode(Unknown Source)
    at com.splunk.dbx.utils.PercentEncodingQueryDecoder.decode(PercentEncodingQueryDecoder.java:20)
    at com.splunk.dbx.command.DbxQueryCommand.getParams(DbxQueryCommand.java:274)
    at com.splunk.dbx.command.DbxQueryCommand.generate(DbxQueryCommand.java:350)
    at com.splunk.search.command.GeneratingCommand.process(GeneratingCommand.java:183)
    at com.splunk.search.command.ChunkedCommandDriver.execute(ChunkedCommandDriver.java:109)
    at com.splunk.search.command.AbstractSearchCommand.run(AbstractSearchCommand.java:50)
    at com.splunk.search.command.GeneratingCommand.run(GeneratingCommand.java:15)
    at com.splunk.dbx.command.DbxQueryCommand.runCommand(DbxQueryCommand.java:256)
    at com.splunk.dbx.command.DbxQueryServer.lambda$handleQuery$1(DbxQueryServer.java:144)
    at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
    at java.util.concurrent.FutureTask.run(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)

Has anybody seen it before?
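Reading the trace, the NPE happens while DbxQueryCommand.getParams decodes the query's URL-encoded parameters, which suggests the input's stored query string reached the command empty or malformed. A sketch of a sanity check, assuming a connection named my_connection (dbxquery is DB Connect's search command, so this exercises the same code path outside the input UI):

| dbxquery connection="my_connection" query="SELECT 1"

If that runs cleanly, re-creating the input and re-entering the SQL rather than editing it in place is worth trying; that workaround is an assumption on my part, not a documented fix.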
I have a log as below:

cod:5678,status:600
cod:9012,status:600
cod:1234,status:600
cod:1234,status:900
cod:4987,status:600
cod:4987,status:900
cod:3655,status:600
cod:3655,status:900

I need a query that gives me this result:

cod   status1  status2
1234  600      900
5678  600
9012  600
4987  600      900
3655  600      900

How can I write a query for this? Thanks
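A minimal sketch, assuming cod and status still need extracting from _raw (index=your_index is a placeholder): group the statuses per cod, then split the multivalue result into positional columns:

index=your_index
| rex "cod:\s*(?<cod>\d+),\s*status:(?<status>\d+)"
| stats list(status) as statuses by cod
| eval status1=mvindex(statuses, 0), status2=mvindex(statuses, 1)
| table cod status1 status2

list() preserves arrival order and duplicates; values() would instead give a sorted, deduplicated set, which also works here if ordering doesn't matter.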
The issue I'm facing: my use case is to detect a successful ssh login from an external ip_address. I have my Linux logs in index=linux_logs. These logs have a field called "hostname", which is sometimes an FQDN and sometimes an ip_address. I have an asset list (lookup file), assets.csv; not all of the FQDNs from linux_logs are in this list. Here is my initial query:

index=linux_logs sourcetype=syslog exe="/usr/sbin/sshd" res=success NOT hostname=?
| stats count, min(_time) as first_time, max(_time) as last_time, values(dest) as dest, values(hostname) as src by acct
| lookup assets.csv dns AS src OUTPUT ip
| fillnull value=no_ip ip

A sample of the results:

acct   count  first_time         last_time          dest                 src                 ip
user1  50     epoch_time_format  epoch_time_format  host1.mycompany.com  src1.mycompany.com  10.36.25.14
user2  40     epoch_time_format  epoch_time_format  host3.mycompany.com  src3.mycompany.com  no_ip

I want to eliminate the RFC 1918 addresses and keep the "no_ip" rows plus the IPs outside the RFC 1918 ranges. I do have a lookup for the RFC 1918 ranges, but I'm struggling with how to write the SPL to check the "ip" field for what I need. Any help is greatly appreciated.
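A sketch of the filter with cidrmatch, hardcoding the three RFC 1918 blocks instead of the lookup (an assumption that hardcoding is acceptable, since those ranges never change):

... | where ip="no_ip" OR NOT (cidrmatch("10.0.0.0/8", ip) OR cidrmatch("172.16.0.0/12", ip) OR cidrmatch("192.168.0.0/16", ip))

To keep using the lookup instead, defining it with match_type = CIDR(<cidr_field>) in transforms.conf lets a plain | lookup tag each ip against the ranges, after which the non-matching rows can be kept with where isnull() on the lookup's output field.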