All Posts


In looking for an audit event we saw this behavior too... anyone else?   Did you get a response outside of your query?
It could be the first. We do have other defined EXTRACTs and other modifications to data pushed to the indexers, and they work properly, but for some reason this portion of the IIS logs just doesn't. I would have to look into whether something with higher priority is overriding it; however, other IIS sourcetype logs aren't turning out this way. I do know that the props.conf is in the correct spot. When we stood up Splunk initially there were custom-written apps rather than the Splunk-supported TA for IIS. I may go that route if I can't get this resolved via our custom app.
PaulPanther's link https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues is where you want to go. Under the "Keep specific events and discard the rest" section, you can find stanzas for props.conf and transforms.conf files that you can place in an app on your indexing machines. Setting the regex of the setparsing stanza to "some message" would give you only the events containing that "some message" and discard the rest.

# In props.conf
[source::/your/log/file/path]
TRANSFORMS-set = setnull,setparsing

# In transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = some message
DEST_KEY = queue
FORMAT = indexQueue

(It is assumed that you already have a working inputs.conf file to get the logs into your indexing machines. You can also set the stanza name in the props.conf file to use your log sourcetype.)
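For example, if you prefer keying off the sourcetype rather than the source path, a minimal sketch of that variant (the sourcetype name "iis" below is just a placeholder; use your own):

# In props.conf -- sourcetype-based stanza instead of source::
[iis]
TRANSFORMS-set = setnull,setparsing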
Ok. This looks better. So the usual suspects are naturally:
1. Mismatch between the sourcetype naming in inputs and props (and possibly some overriding settings defined for source or host).
2. Something overriding these parameters, defined elsewhere with higher priority (check with btool).
3. Wrongly placed props.conf (on an indexer when you have a HF in your way).
Of course there is also a question of "why aren't you simply using the Splunk-supported TA for IIS?".
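For point 2, a quick sanity check on the box that actually does the parsing might look like the following (the sourcetype name is just a placeholder); the --debug flag shows which file each effective setting comes from:

$SPLUNK_HOME/bin/splunk btool props list your_iis_sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list --debug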
Your Splunk update has also updated the Python urllib3 library to version 1.26.13, but the Splunk_TA_paloalto app expects a version of urllib3 between 1.21.1 and 1.25 (inclusive). Therefore the Palo Alto app is complaining. The ideal solution to this problem is to request that the Palo Alto app developers make the app support urllib3 version 1.26.13.

If you would rather not wait for the developers to update the app, you could tell the app to just accept version 1.26.13 and then hope for the best. It might work without a hitch, or it might produce other errors. To force the app to accept urllib3 1.26.13, edit the following file:

/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py

In the check_compatibility function, there will be a section for checking urllib3. Change the line "assert minor <= 25" to "assert minor <= 26":

# Check urllib3 for compatibility.
major, minor, patch = urllib3_version  # noqa: F811
major, minor, patch = int(major), int(minor), int(patch)
# urllib3 >= 1.21.1, <= 1.25
assert major == 1
assert minor >= 21
assert minor <= 26

Save the file and reload the app (or restart splunkd), and the error should go away.
It is described in the "route and filter data" document you've been pointed to. One important thing that people often misunderstand at first: if you configure multiple transforms in one transform group, all of them are executed in sequence. So you must define a transform redirecting all events to nullQueue (dropping them) first, and only after that a transform sending the chosen events to indexQueue.
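In other words, the order within the comma-separated list is the execution order; a rough sketch (stanza names are placeholders, and the drop/keep transforms themselves look like the setnull/setparsing pair shown earlier in this thread):

# props.conf -- transforms in one group run left to right
[your_sourcetype]
TRANSFORMS-filter = setnull, setparsing   # drop everything first, then re-route the wanted events to indexQueue

If setparsing came before setnull, the setnull transform would simply send everything to nullQueue afterwards.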
When you're overwriting the value of the _TCP_ROUTING metadata field, you're effectively telling Splunk to route the events to this destination (output group) only. If you want to route some data to more than one output group, you must include all relevant output groups in _TCP_ROUTING, like:

_TCP_ROUTING = my_primary_indexers, my_secondary_indexers

See https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Configure_routing

Of course you don't have to put the transforms.conf into etc/system/local (in fact it'd be best if you didn't do that).
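For instance, a rough sketch of cloning matching events to two output groups (group names, stanza names, and server addresses below are placeholders, not anything from your setup):

# outputs.conf
[tcpout:my_primary_indexers]
server = primary-idx.example.com:9997

[tcpout:my_secondary_indexers]
server = secondary-idx.example.com:9997

# transforms.conf (in an app, not etc/system/local)
[route_to_both]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = my_primary_indexers,my_secondary_indexers

# props.conf
[your_sourcetype]
TRANSFORMS-routing = route_to_both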
2024-04-08 02:24:47 10.236.6.10 GET /wps/wcm/webinterface/login/login.jsp "><script>alert("ibm_login_qs_xss.nasl-1712543165")</script> 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 0 4.35.178.138
2024-04-08 02:24:47 10.236.6.10 GET /cgi-bin/login.php - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 0 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 4.35.178.138
So if there is an error seen in the ABC log, then you would like to find the details for that error in the EFG log. You would like to count the number of errors for each correlationId, so that you can later search for that correlationId and list all of the errors that occurred along with the details message for that correlationId. Is that correct? E.g.:

CorrelationId   Errors   Details
abcd-0001       0
abcd-0002       4        Error msg 1, Error msg 2, Error msg 3, Error msg 4
abcd-0003       1        Error msg 1
abcd-0004       2        Error msg 1, Error msg 2
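If that is the intent, a rough SPL sketch of that shape could look like the following (the index, sourcetype, and field names are assumptions; adjust to your actual data):

index=your_index sourcetype="abc_log" OR sourcetype="efg_log"
| eval is_error=if(sourcetype=="abc_log" AND like(message,"%ERROR%"), 1, null())
| eval detail_msg=if(sourcetype=="efg_log", details, null())
| stats count(is_error) as Errors values(detail_msg) as Details by correlationId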
Can you please paste it into either a preformatted paragraph or a code block? Here the data is already butchered by the forum's mechanics so we can't see the original raw events. Is that whole block supposed to be in a single line in the IIS log file?
Hi @karthi2809, On your screenshots I noticed you are using Verbose mode, and you see events on the "Events" tab of the search interface, not in the query results shown in the "Statistics" tab. I think you need to filter the "Response" field values to show only success responses. You can use the mvfilter function to filter the Response field. I filtered Response values that start with "PRD"; you can adjust the regex inside match() to your needs. Please try adding the eval below at the end of your search:

| eval Response=mvfilter(match(Response,"^PRD"))
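If you want to see what mvfilter does on its own, here is a tiny self-contained run you can paste into a search bar (the sample values are made up):

| makeresults
| eval Response=split("PRD(SUCCESS): Extract A,DEV(FAILED): Extract B,PRD(SUCCESS): Extract C", ",")
| eval Response=mvfilter(match(Response,"^PRD"))

Only the two values starting with "PRD" remain in the multivalue Response field.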
Assuming your regex correctly extracts the fields you want, could you try this:

# transforms.conf
[my-log]
REGEX = FICHERO_LOG.*\=\s*(?<log>.*?)\s*\n
MV_ADD = true

# props.conf
[extractingFields]
TRANSFORMS-ArbitraryName1 = my-log
TRANSFORMS-ArbitraryName2 = other_transforms_stanza

Note that the setting name is MV_ADD (with an underscore), not MV-ADD.
Still no dice on that.  It only happens to these few logs that are formatted this way.  Could there be anything else preventing it from breaking apart properly?    
Hi @tscroggins, Really appreciate your comments; I'm currently working with the changes you've suggested. Thanks and Regards,
Hello. How do I determine what the original source name may be?
We have several summary searches that collect data into metric indexes. They run nightly and some of them create quite a large number of events (~100k). As a result we sometimes see warnings that the metric indexes cannot be optimised fast enough. A typical query looks like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

The main contributor to the large number of events is the cardinality of deviceId (~100k), which effectively is a "MAC" address with a common prefix and defined length. I could create 4 / 8 / 16 reports, each selecting a subset of deviceIds, and schedule them at different times, but it would be quite a burden to maintain those basically identical copies.

So... I wonder if there is a mechanism to shard the search results and feed them into many separate mcollects that are spaced apart by some delay. Something like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| shard by deviceId bins=10 sleep=60s
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Maybe my pseudo code above is not so clear. What I would like to achieve is that instead of one huge mcollect I get 10 mcollects (each for approximately 1/10th of the events), scheduled approximately 60s apart from each other...
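In case it helps while weighing the multiple-reports route: one way to pick a deterministic subset per report without hand-maintaining deviceId lists could be hashing the deviceId into buckets. This is only a sketch under that assumption (the md5-based bucketing is not an existing sharding feature), and each scheduled report would still differ only in the bucket number it keeps:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| eval shard=tonumber(substr(md5(deviceId), 1, 4), 16) % 10
| where shard==0   ``` report N keeps shard==N ```
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId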
What @PickleRick points out is that the event snippets in your illustration do not contain the necessary fields used in your search. (Side lesson #1: Screenshots do not help anything except in explaining expected and actual visualization.) Let me demonstrate with the following.

First of all, none of your illustrations explains where the JSON path content.payload{} comes from. You subsequently put this extracted field in mvexpand. Splunk will give you an error about the nonexistent field content.payload{}. Until you can demonstrate that this JSON path exists somewhere in your data, your illustrated full search cannot succeed. (Side lesson #2: Complicated SPL does not help diagnosis. Not only does it discourage others from reading and understanding your message, it also blurs your own thought process. Distill the search to the point where you can clearly illustrate a "yes"-"no" choice.)

Secondly, your illustrations do not produce any value for JobType, which, according to your search, comes from

| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",
    like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",
    like('message',"Expense Extract Process started%"),"Scheduled")

In other words, none of your illustrated JSON matches any of the three conditions, therefore | where JobType!=" " will give you no result.

To illustrate the above two points, let's comment out the problematic portions of the SPL and see what comes out from your data snippets:

| search NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*")
```| spath content.payload{}
| mvexpand content.payload{} ```
| stats values(content.SourceFileName) as SourceFileName values(content.JobName) as JobName values(content.loggerPayload.archiveFileName) as ArchivedFileName values(message) as message min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time by correlationId
| rex field=message max_match=0 "Expense Extract Process started for (?<FileName>[^\n]+)"
| rex field=message max_match=0 "API: START: /v1/expense/extract/ondemand/(?<OtherRegion>[^\/]+)\/(?<OnDemandFileName>\S+)"
| eval OtherRegion=upper(OtherRegion)
| eval OnDemandFileName=rtrim(OnDemandFileName,"Job")
| eval "FileName/JobName"= coalesce(OnDemandFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"%API: START: /v1/expense/extract/ondemand%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled")
| eval Status=case(like('message' ,"%Concur AP/GL File/s Process Status%"),"SUCCESS", like('tracePoint',"%EXCEPTION%"),"ERROR")
| eval Region= coalesce(Region,OtherRegion)
| eval OracleRequestId=mvappend("RequestId:",RequestID,"ImpConReqid:",ImpConReqId)
| eval Response= coalesce(message,error,errorMessage)
| eval StartTime=round(strptime(Logon_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Logoff_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| eval match=if(SourceFileDTLCount=TotalAPGLRecordsCountStaged,"Match","NotMatch")
| rename Logon_Time as Timestamp
| table Status JobType Response ArchivedFileName ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs priority match
```| where JobType!=" "
| search Status="*"```

The output has columns Status, JobType, Response, ArchivedFileName, Total Elapsed Time, correlationId. JobType, ArchivedFileName, and Total Elapsed Time are empty in every row:

correlationId 19554d60 (Status=SUCCESS), Response values:
  Before calling flow post-PInvoice-SubFlow
  Concur AP/GL File/s Process Status
  PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 755 Operating Unit: BZ_OU
  PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 725 Operating Unit: AB_OU

correlationId 43b856a1 (Status empty), Response values:
  After calling flow SubFlow
  PRD(SUCCESS): Concur AP/GL Extract- Expense Report. Concur Batch ID: 450 Company Code: 725 Operating Unit: AB_OU
  Post - Expense Extract processing to Oracle

correlationId 9a1219f2 (Status empty), Response values:
  After calling flow post-APInvoice-SubFlow
  Before calling flow post-APInvoice-SubFlow
  Concur Process Status
  ISG AP Response
  PRD(SUCCESS): Concur AP/GL Extract - AP Expense Report. Concur Batch ID: 95
  Post - Expense Extract processing to Oracle

As you can see, only one correlationId has a non-null Status, and none of them have any field other than Response. This is a common troubleshooting technique: reduce search complexity to reveal the parts that make a difference.

The following is an emulation of the data snippets you illustrated. Play with it and compare with your real data:

| makeresults
| eval data = mvappend(
    "{ \"correlationId\" : \"43b856a1\", \"message\" : \"Post - Expense Extract processing to Oracle\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"43b856a1\", \"message\" : \"After calling flow SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"43b856a1\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract- Expense Report. Concur Batch ID: 450 Company Code: 725 Operating Unit: AB_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"19554d60\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 755 Operating Unit: BZ_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"19554d60\", \"message\" : \"Concur AP/GL File/s Process Status\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"19554d60\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - Expense Report. Concur Batch ID: 398 Company Code: 725 Operating Unit: AB_OU\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"19554d60\", \"message\" : \"Before calling flow post-PInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Before calling flow post-APInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"PRD(SUCCESS): Concur AP/GL Extract - AP Expense Report. Concur Batch ID: 95\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Post - Expense Extract processing to Oracle\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"Concur Process Status\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"ISG AP Response\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }",
    "{ \"correlationId\" : \"9a1219f2\", \"message\" : \"After calling flow post-APInvoice-SubFlow\", \"tracePoint\" : \"FLOW\", \"priority\" : \"INFO\" }")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation for
index="mulesoft" applicationName="s-concur-api" environment=PRD priority timestamp NOT message IN ("API: START: /v1/expense/extract/ondemand/accrual*") ```
Hello jconger, Thanks for the comment. My system admin did the registration and added the permissions during the original setup.  We're not getting any message trace data. I can ask him to double-check the roles, but as far as I know, it was done properly. 
This source does not seem to match the visualisation you have shown earlier. Are you using a trellis of singles or not?