All Topics

I'm going to describe a typical use case. The software team will have one log file for most of its output; let's call it HOUSES. This data will be generic health-status information, transaction information, and sometimes data payloads in XML and some in JSON. My practice has been to use a UF to monitor that file. Then on the indexers I use props to go through that data and set my timestamp, line breaking, and transforms. In my transforms I use REGEX to match unique fields in the data, for example RED-HOUSE. This grabs all data from the original file that contains RED-HOUSE. I then do a sourcetype override and make the new sourcetype HOUSES:RED-HOUSE. Then on my search head I define my props/transforms (using EXTRACT/REPORT) for the field extractions. A brief example:

Data (let's say the sourcetype is LOGS):

dcnsnoctads-1 2021/02/12 01:59:59.105 GMT-FANS ADS START PERIODIC PERIODIC TIMER_START .D00002
dcnsnoctads-1 2021/02/12 01:59:59.105 GMT-FANS ADS SEND PERIODIC CONTRACT REQUEST #502, .D00002
dcnsnoctads-1 2021/02/12 01:59:59.105 GMT-FANS ADS PERIODIC PERIODIC CONTRACT TIMER_EXPIRED 0,.D00002

On the indexing tier:

props.conf:

[LOGS]
SHOULD_LINEMERGE = false
TRANSFORMS-ADS = ADS, EFG, XYZ

transforms.conf:

[ADS]
REGEX = ADS
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::LOGS:ADS

On the search head:

props.conf:

[LOGS:ADS]
REPORT-ADS = ADS_EXTRACTIONS

(Let's assume several data points use the same extraction, so REPORT should be used.)

transforms.conf:

[ADS_EXTRACTIONS]
REGEX = ADS (?P<ADS_Method>\w+\s\w+)\s(?P<ADS_Method_Type>\w+)\s(?P<Method_Message>\w+)(?:.*)(?P<tail_no>.{7}$)

Is this overkill, or is this the right approach? I basically try to make a sourcetype for every different field-extraction format there is.
I have a lookup, test.csv, with a list of 10 IPs (src_ip). I want to search a data model for traffic from those 10 IPs and display info on each IP even if it has no matches. Currently I have tried:

| tstats count from datamodel=DM where [| inputlookup test.csv | rename src_ip AS DM.src_ip | fields DM.src_ip] by DM.src_ip
| rename DM.src_ip AS src_ip
| iplocation src_ip
| fillnull value="NULL"
| table src_ip, Country

The issue is that if an IP from the lookup isn't found in the data model, its entire row is dropped, so instead of 10 IPs with 10 countries I get maybe 5-6 IPs and their respective countries. I want the table to always include all 10 IPs from the lookup. I understand that I could just use the lookup to get the countries, but I specifically want the data model available for other data while always including all 10 IPs in the table.
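One way to keep all 10 rows, sketched here but untested against your data model: append the lookup IPs with a zero count and re-aggregate, so IPs with no data-model matches survive into the table. Field names follow the question; adjust as needed.

```
| tstats count from datamodel=DM
    where [| inputlookup test.csv | rename src_ip AS DM.src_ip | fields DM.src_ip]
    by DM.src_ip
| rename DM.src_ip AS src_ip
| append [| inputlookup test.csv | fields src_ip | eval count=0]
| stats sum(count) AS count by src_ip
| iplocation src_ip
| fillnull value="NULL" Country
| table src_ip, Country
```

The `stats sum(count) by src_ip` collapses each appended zero-count row into the matching tstats row when one exists, and keeps it standalone when it doesn't.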
I'm using the Splunk Add-on for Symantec Blue Coat ProxySG. I'm receiving the logs (from ASG version 6.7.3.14) and seeing most of the data as expected. The problem is that everything before "-splunk_format" is getting dropped. If I look at the raw log using "show source" in Splunk search, it looks like this:

- splunk_format - c-ip=172.16.186.28 cs-bytes=20058 cs-categories="News" cs-host=data.api.cnn.io cs-ip=172.30.50.202 cs-method=CONNECT cs-uri-port=443 cs-uri-scheme=tcp cs-User-Agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0" cs-username=iho dnslookup-time=0 duration=285 rs-status=0 s-action=TCP_TUNNELED s-ip=172.30.50.202 service.name="Explicit HTTP" service.group="Standard" s-supplier-ip=172.30.50.202 s-supplier-name=172.30.50.202 sc-bytes=680861 sc-filter-result=OBSERVED sc-status=200 time-taken=285209 c-url="tcp://data.api.cnn.io:443/" cs-headerlength=213 cs-threat-risk=unavailable r-ip=151.101.53.67 s-connect-type=Direct s-icap-status=ICAP_NOT_SCANNED s-sitename=http.proxy s-source-port=32401 s-supplier-country=Unavailable sr-Accept-Encoding=identity x-cookie-date=Sat,%2013-Feb-21%2016:41:27%20GMT x-cs-connection-negotiated-cipher=none x-exception-category-review-message="<br><br>Your request was categorized by Blue Coat Web Filter as 'News'. <br>If you wish to question or dispute this result, please click <a href=%22http://sitereview.bluecoat.com/sitereview.jsp?referrer=136&url=tcp://data.api.cnn.io:443/%22>here</a>."
x-exception-sourceline=0 x-rs-connection-negotiated-cipher=none cs-uri-path=/ c-uri-pathquery=/ If I use tcpdump on the server running sc4s, I see this: <111>1 2021-02-13T16:47:43 ShrSecGatPd01 bluecoat - splunk_format - c-ip=172.16.186.28 rs-Content-Type="-" cs-auth-groups=- cs-bytes=1418 cs-categories="Technology/Internet;Web Ads/Analytics" cs-host=mcdp-sadc1.outbrain.com cs-ip=172.30.50.202 cs-method=CONNECT cs-uri-port=443 cs-uri-scheme=tcp cs-User-Agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0" cs-username=iho dnslookup-time=0 duration=7 rs-status=0 rs-version=- s-action=TCP_TUNNELED s-ip=172.30.50.202 service.name="Explicit HTTP" service.group="Standard" s-supplier-ip=172.30.50.202 s-supplier-name=172.30.50.202 sc-bytes=1850 sc-filter-result=OBSERVED sc-status=200 time-taken=7161 x-exception-id=- x-virus-id=- c-url="tcp://mcdp-sadc1.outbrain.com:443/" cs-Referer="-" c-cpu=- connect-time=- cs-auth-groups=- cs-headerlength=229 cs-threat-risk=unavailable r-ip=66.225.223.159 r-supplier-ip=- rs-time-taken=- rs-server=- s-connect-type=Direct s-icap-status=ICAP_NOT_SCANNED s-sitename=http.proxy s-source-port=55231 s-supplier-country=Unavailable sc-Content-Encoding=- sr-Accept-Encoding=identity x-auth-credential-type=- x-cookie-date=Sat,%2013-Feb-21%2016:47:43%20GMT x-cs-certificate-subject=- x-cs-connection-negotiated-cipher=none x-cs-connection-negotiated-cipher-size=- x-cs-connection-negotiated-ssl-version=- x-cs-ocsp-error=- x-cs-Referer-uri=- x-cs-Referer-uri-address=- x-cs-Referer-uri-extension=- x-cs-Referer-uri-host=- x-cs-Referer-uri-hostname=- x-cs-Referer-uri-path=- x-cs-Referer-uri-pathquery=- x-cs-Referer-uri-port=- x-cs-Referer-uri-query=- x-cs-Referer-uri-scheme=- x-cs-Referer-uri-stem=- x-exception-category=- x-exception-category-review-message="<br><br>Your request was categorized by Blue Coat Web Filter as 'Technology/Internet;Web Ads/Analytics'. 
<br>If you wish to question or dispute this result, please click <a href=%22http://sitereview.bluecoat.com/sitereview.jsp?referrer=136&url=tcp://mcdp-sadc1.outbrain.com:443/%22>here</a>." x-exception-company-name=- x-exception-contact=- x-exception-details=- x-exception-header=- x-exception-help=- x-exception-last-error=- x-exception-reason="-" x-exception-sourcefile=- x-exception-sourceline=0 x-exception-summary=- x-icap-error-code=- x-rs-certificate-hostname=- x-rs-certificate-hostname-category=- x-rs-certificate-observed-errors=- x-rs-certificate-subject=- x-rs-certificate-validate-status=- x-rs-connection-negotiated-cipher=none x-rs-connection-negotiated-cipher-size=- x-rs-connection-negotiated-ssl-version=- x-rs-ocsp-error=- cs-uri-extension=- cs-uri-path=/ cs-uri-query="-" c-uri-pathquery=/ What do I need to edit to get it to parse out the timestamp and hostname?
How do I get a complete list of all indexers in my Splunk Enterprise environment?
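A couple of hedged sketches for this, assuming you can run searches from a search head connected to the indexers:

```
| rest /services/search/distributed/peers
| table peerName host status
```

Alternatively, `| tstats count where index=_internal by splunk_server` lists every indexer that has recently indexed internal logs. Field names are taken from the distributed-peers REST endpoint; verify them in your environment.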
How do I confirm the host name and IP address of the host I am logged in to in the Splunk GUI?
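One possible approach, as a sketch: read the local server's identity from the server-info REST endpoint, run on the instance you are logged in to.

```
| rest /services/server/info splunk_server=local
| table serverName host_fqdn
```

This shows the server name and FQDN; for the IP address you may still need to resolve the FQDN (nslookup/ping) outside Splunk.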
Hi, I currently have a search that shows IIS successes, failures, totals, success percentage, failure percentage, and average response time, for which I used addcoltotals. The only issue I have is that the average-response total does not show the overall average; it just adds up the per-row averages. Is there any way around this?

Thanks

Joe
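Since addcoltotals can only sum a column, the overall average has to be computed from a carried sum and count. A sketch of the pattern; the field names (time_taken, site) are assumptions, not from the original search:

```
index=iis sourcetype=iis
| stats count AS total, sum(time_taken) AS sum_response by site
| eval avg_response = round(sum_response / total, 2)
| appendpipe
    [ stats sum(total) AS total, sum(sum_response) AS sum_response
    | eval avg_response = round(sum_response / total, 2), site = "Total" ]
| fields site, total, avg_response
```

The appendpipe adds a "Total" row whose average is recomputed from the grand sums rather than averaged from the per-row averages.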
Below is the HAProxy log format, where we see GET and POST entries along with the status code and the response time in milliseconds (example: 200 is the status code, 5715 is the response time in milliseconds). I would like to calculate the average response time over 1-minute intervals.

Feb 15 12:19:49 localhost haproxy[7046]: XX.XX.XXX.X:41534 [15/Feb/2021:12:19:49.989] xyz rest_service/rest-hostname-port 0/0/0/6/6 200 5715 - - --VN 73/73/7/0/0 0/0 "GET /filterservices/xx/sadfsfsd HTTP/1.1"
Feb 15 12:19:49 localhost haproxy[7046]: XX.XX.XXX.X:50177 [15/Feb/2021:12:19:49.955] xyz rest_service/rest-hostname-port 0/0/0/2/3 200 1541 - - --VN 73/73/7/0/0 0/0 "GET /contentservices/js/feedback_container.js?_=234324255 HTTP/1.1"
Feb 15 12:19:49 localhost haproxy[37427]: XX.XX.XXX.X:56769 [15/Feb/2021:12:19:49.655] xyz sserices/servuce.service-hostname 0/0/0/7/9 200 2848 - - ---- 79/79/1/1/0 0/0 "POST /service/service/select HTTP/1.1"
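A sketch of one way to do this, assuming from the sample lines that the status code and response time are the two numbers immediately after the timing block (verify the rex against your real events; index name is a placeholder):

```
index=proxy sourcetype=haproxy
| rex "\d+/\d+/\d+/\d+/\d+\s+(?<status_code>\d{3})\s+(?<response_time_ms>\d+)\s"
| timechart span=1m avg(response_time_ms) AS avg_response_ms
```

timechart with span=1m gives one averaged value per minute; you can add `by status_code` to split by response code.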
Hello All,

I am trying to find where a user is getting mapped to a role. I can see that the user is mapped to the power role in the web UI, but I do not see the user being mapped there in /opt/splunk/etc/system/local/authentication.conf. So what am I missing? Also, there is nothing in /opt/splunk/etc/apps/* that would map the user to the power role.

Thoughts?
thanks
ed
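A sketch for inspecting the effective mapping: the REST endpoint shows the roles Splunk has actually resolved for the user.

```
| rest /services/authentication/users splunk_server=local
| search title="<username>"
| table title roles
```

Also worth checking: for native (non-LDAP/SAML) users, the user-to-role mapping is stored in $SPLUNK_HOME/etc/passwd rather than in authentication.conf, and `splunk btool authentication list --debug` shows which file each authentication setting comes from.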
Hello there. I am monitoring files using inputs.conf and defining the source and sourcetype there. I am trying to split one sourcetype into multiple sourcetypes.

inputs.conf:

[monitor:///opt/splunk/etc/apps/out/bin/out/.../*.gz]
disabled = 0
index = security_abc_index
sourcetype = abd_s3
source = abd
interval = 60

This is props.conf, where I am doing the parsing:

[abd_s3]
LINE_BREAKER = ""{"
NO_BINARY_CHECK = 1
TRUNCATE = 0
SHOULD_LINEMERGE = false
TRANSFORMS-splitsourcetype = event1, event2, event3, event4

And transforms.conf; event2, 3, and 4 have the regexes for events I want routed to their own sourcetypes, and everything not matching a regex should go to event1:

[event1]
DEST_KEY = MetaData:Sourcetype
REGEX = .
FORMAT = sourcetype::event1
[event2]
DEST_KEY = MetaData:Sourcetype
REGEX = \{\"AgentLoadFlags\".*
FORMAT = sourcetype::event2
[event3]
DEST_KEY = MetaData:Sourcetype
REGEX = \{\"GatewayIP\".*
FORMAT = sourcetype::event3
[event4]
DEST_KEY = MetaData:Sourcetype
REGEX = \{\"ComputerName\".*
FORMAT = sourcetype::event4

In the index I only get events in sourcetype event1 (the ones not matching any regex); the events that do match a regex are not getting indexed at all. Am I doing anything wrong?
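One thing that looks off (a guess, not verified against this data): LINE_BREAKER must contain a capture group marking the event boundary, and `""{"` is not a valid value, which could break event breaking for exactly the events that start with `{`. A sketch of the props stanza as I would expect it:

```
[abd_s3]
LINE_BREAKER = ([\r\n]+)\{
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = 1
TRUNCATE = 0
TRANSFORMS-splitsourcetype = event1, event2, event3, event4
```

With the catch-all event1 listed first, the later event2-4 transforms override it whenever their regex matches. Also note that props/transforms of this kind must live on the first full (parsing) Splunk instance the data reaches, not on a universal forwarder.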
Hi, initially I was using a datetime column as the rising column, but there was some data loss, i.e. not every row was getting from SQL into Splunk. So I changed the rising column to Id (int), and it's working fine in my TST environment. But when I do the same for Prod, after every run my checkpoint value changes to a datetime instead of an Id, although the rising column shows Id. Because of that, no data is coming into Splunk. Please help.
I'm running into an issue where multiple artifacts are being submitted as a single Splunk query. Below is my current workflow:

1. Extract domains from URL
2. Format the Splunk query as: '| inputlookup someCSV.csv | search domain={0}'
3. Run the Splunk query

The issue is that the Splunk query that is run appears to append the artifacts in a comma-delimited list rather than running individual queries:

query = | inputlookup someCSV.csv | search domain=domain1.com, domain2.com, domain3.com

when I'm expecting the following searches to be run:

query = | inputlookup someCSV.csv | search domain=domain1.com
query = | inputlookup someCSV.csv | search domain=domain2.com
query = | inputlookup someCSV.csv | search domain=domain3.com

Is there a way to construct this so each extracted domain is run in a separate Splunk query?
Hi everybody, I am asking about the meaning of "the owner context of the service" when I use the setOwner() method in Java.
Hello, I need to create a dashboard panel (table) whose query uses the following filtering condition:

account_name = "$userNameToken$"

How can I handle the situation where the token is not defined, so that the panel displays all users? Thanks
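One common approach in Simple XML is to give the input a wildcard default, so the token is always defined and matches every user when nothing has been entered (the input type and token name here are assumed from the question):

```xml
<input type="text" token="userNameToken">
  <label>User name</label>
  <default>*</default>
</input>
```

With `account_name="$userNameToken$"` in the query, the `*` default matches any account_name. Alternatively, a panel element with `depends="$userNameToken$"` stays hidden until the token is set.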
I would like to draw a line from point to point on a scatter chart on a dashboard. For example:

| makeresults
| eval _raw="play_type,xstart,ystart
touch_start,1,1
touch_end,10,75
kick_start,35,90
kick_end,40,60
throw_start,20,15
throw_end,55,65"
| multikv forceheader=1
| table play_type,xstart,ystart

For this result I would like to draw a line from each *_start point to the corresponding *_end point: the first from "touch_start" to "touch_end", then "kick_start" to "kick_end", and the last from "throw_start" to "throw_end". How can I draw a line for each? Please advise.
Hi, I'm using the Microsoft Azure Add-on for Splunk 3.1.0, and I get the following error message when I try to run the billing and consumption input:

2021-02-15 14:35:00,733 ERROR pid=65719 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_consumption.py", line 92, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_azure_consumption.py", line 154, in collect_events
    if len(value) > 0:
UnboundLocalError: local variable 'value' referenced before assignment

Is this a known issue?
Florent
Hi All, I want to filter out logs at ingest time so that if the keyword "GET / - 80" is present in a log line, it is not ingested into Splunk, while the rest of the logs are. I will place the props and transforms on the heavy forwarder so the filtering happens during parsing.

Sample logs:

2021-02-15 13:04:28 xxx.xx.xxx.x GET / - 80 - xxx.xx.xx.x - - xxx x x xx
2021-02-15 13:04:27 xxx.xx.xxx.x GET / - 443 - xxx.xx.xx.x - - xxx x x xx

where "x" represents a digit of the IPs. Kindly help with the props and transforms. The sourcetype is "abc".
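A sketch of the nullQueue routing this describes, to go on the heavy forwarder. The regex is hedged against the two sample lines (the trailing whitespace keeps it from matching ports like 8080); test it against real events before deploying.

props.conf:

```
[abc]
TRANSFORMS-filter_get80 = drop_get_80
```

transforms.conf:

```
[drop_get_80]
REGEX = GET\s/\s-\s80\s
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are sent to the nullQueue and discarded before indexing; everything else passes through unchanged.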
Hi, I have buckets in the pending fixup tasks on my cluster master with the status "Cannot replicate as bucket hasn't rolled yet". What confuses me is that they are all frozen and have already been deleted:

Fixup Reason: bid=XYZ removed from peer=XYZ, frozen=1

So where does a frozen bucket need to be replicated to? Or is this error misleading? I am not able to roll the buckets from the cluster master UI. I would be happy for any hint. Thank you, David
I have a scenario where typical HTTP requests are logged in Splunk. Every request has a unique identifier saved in a "request_id" field. Between request and response, the server generates a set of logs/events, each with this "request_id" added. So far so good; now it is possible to find the appropriate server logs for a client HTTP issue via the "request_id".

I often also need to find all errors for a particular device. The device identifier is part of the first log/event of the request, but the following logs/events no longer carry it. So basically something like:

1. req.begin "Started... deviceId=12345", request_id="1"
2. ... request_id="1", deviceId=???
3. ... request_id="1", deviceId=???
4. ... request_id="1", deviceId=???
5. req.end ... request_id="1", deviceId=???

The search would:

- Search for all "req.begin" events with device identifier "xyz"
- Get all "request_id" values of those events
- Finally, get all events containing one of the above "request_id" values

I'm not sure how to build the query for this and would be very grateful for some tips!

Best regards
Tore
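The steps above map onto SPL's subsearch pattern. A sketch, with the index name and field extractions as assumptions:

```
index=app
    [ search index=app "req.begin" deviceId=12345
    | fields request_id ]
```

The subsearch returns its request_id values as an implicit OR filter for the outer search, so the outer search yields every event for every request started by that device. Watch the default subsearch limits (10,000 results / 60 seconds); for very large ID sets, a stats-based approach grouping by request_id scales better.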
Hi, I am struggling with some logs in a specific directory; they just don't seem to be ingested into Splunk. If I put a normal .log file with a standard time format in the directory, it populates just fine. But these logs have the following format:

O", "message": "Test logging" }
{ "time": "2020-12-07 09:46:52.7940", "threadId": "30", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-07 12:14:34.7402", "threadId": "53", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-07 13:48:24.8650", "threadId": "12", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-08 10:33:40.0607", "threadId": "68", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-08 11:53:56.7778", "threadId": "51", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-09 08:42:53.6465", "threadId": "133", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-09 10:35:44.0103", "threadId": "152", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-11 10:38:27.0194", "threadId": "113", "level": "INFO", "message": "Test logging" }
{ "time": "2020-12-11 12:18:25.0442", "threadId": "6", "level": "INFO", "message": "Test logging" }

And nothing comes into Splunk at all. I have commented out all the timestamp options in props.conf to force it to use the default behavior, but still nothing. Is this related to a setting that should be in props.conf? Any assistance would be appreciated. Thanks
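A hedged props sketch for this format (the sourcetype name is an assumption), using the leading `{ "time"` as the event boundary and pointing timestamp extraction at the time field:

```
[my_json_logs]
LINE_BREAKER = ([\r\n]+)\{\s*"time"
SHOULD_LINEMERGE = false
TIME_PREFIX = "time":\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json
```

Also check the input itself: if the monitor stanza uses ignoreOlderThan, files whose modification time is older than that window are skipped entirely, which would match the "nothing at all" symptom for December logs.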
I have a lookup with expected sources, and I'm comparing them with the real log events to check whether any of them aren't sending as expected. The hosts in the lookup have no domain, but the hosts in the logs have the domain appended to the hostname. I want to join the lookup with the list of sending hosts, but I need the join to treat superSide and superSide.computer.level.com as the same hostname. I have found answers using wildcards, but that doesn't seem to work; is there another good solution for this problem?
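Rather than wildcard matching, one approach is to normalize both sides by stripping the domain before comparing. A sketch; the lookup file and field names are assumptions:

```
index=your_index
| eval host_short=lower(mvindex(split(host, "."), 0))
| stats count by host_short
| append
    [| inputlookup expected_hosts.csv
    | eval host_short=lower(host), count=0
    | fields host_short, count ]
| stats sum(count) AS events by host_short
| where events=0
```

split(host, ".") with mvindex 0 reduces superSide.computer.level.com to superSide, so both sides aggregate under the same key; the final where clause leaves only the expected hosts that produced no events.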