
All Posts

Hi, do you have any solutions? I'm trying to upload files from SharePoint to Splunk Enterprise as well.
Thank you
Thank you! It did help 
If you add the following after your timechart command, it will change the values from numbers to percentages:

| addtotals fieldname=_Total
| foreach * [ eval <<FIELD>>=round(('<<FIELD>>'/_Total*100),2) ]

Note that the _ in front of the total field name prevents it from being displayed; the foreach command then just calculates the percentages.
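For context, a complete search using this pattern might look like the following; the index, sourcetype, and status field are placeholders, not anything from your environment:

index=web sourcetype=access_combined
| timechart count by status
| addtotals fieldname=_Total
| foreach * [ eval <<FIELD>>=round(('<<FIELD>>'/_Total*100),2) ]

Each column then shows its percentage of that row's total rather than a raw count.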
What you suggest is not possible in a single search. Assuming the cardinality does not change much over the 24h period, I don't suppose there is benefit in running the search hourly, which would produce more metrics and would need to be aggregated on consumption. However, you could create N searches where the body of each search is a single macro that runs your base SPL, and you call the macro with the device ID prefixes you want to search for (see the sketch below). Not an elegant solution, but functional. I don't understand the message you say you are getting, though; I am not familiar with it. Secondly, what is the impact of that message occurring? Does it break the collected data in some way, and does it stop other searches from working?
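As a rough sketch, assuming a hypothetical macro named device_metrics(1) in macros.conf (the index and field names here are placeholders, not your actual base SPL):

# macros.conf
[device_metrics(1)]
args = prefix
definition = index=my_metrics device_id="$prefix$*" | stats count by device_id

Each of the N scheduled searches then just invokes it with a different prefix, e.g. `device_metrics(abc)`.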
Thanks ITWhisperer! I did try the string conversion, but it did not work. This looks like it did the trick!
Try something like this:

index=email2 sourcetype=my_sourcetype source_user=*
    [ search index=email1 sourcetype=my_sourcetype source_user=*
    | eval recipient = source_user
    | fields recipient
    | dedup recipient
    | format ]

The dedup and format keep the subsearch output small, which matters because subsearch results are limited by default (to roughly 10,000 results and 60 seconds of runtime); hitting those limits over a longer time range can silently change what the outer search matches.
| eval IN = strptime(in, "%Y%m%d%H%M%S")
| eval OUT = strptime(out, "%Y%m%d%H%M%S")
| eval Duration = tostring(OUT - IN, "duration")
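As a quick worked check using the values from the question below:

| makeresults
| eval in = "20240401183030", out = "20240401193030"
| eval IN = strptime(in, "%Y%m%d%H%M%S"), OUT = strptime(out, "%Y%m%d%H%M%S")
| eval Duration = tostring(OUT - IN, "duration")

This returns Duration = 01:00:00, since the two timestamps are exactly 3600 seconds apart.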
I am trying to find the duration for a time span. The "in" and "out" numbers are included in the data as type: number, e.g.:

in = 20240401183030
out = 20240401193030

I attempted:

| convert mktime(in) AS IN
| convert mktime(out) AS OUT
| eval Duration = OUT - IN

but this does not perform the correct time math. I have not been able to find a function that directly converts a number to a time, or some other way to get the right duration between the two.
Hi all, thanks in advance for your time! I have a problem writing a properly working query for this case study: I need to take data from index=email1 to find matching data in index=email2. I tried to do it this way: from index=email1 I take the fields src_user and recipient and use an appropriate search to look for them in the email2 index. Query examples that I used:

index=email1 sourcetype=my_sourcetype source_user=*
    [ search index=email2 sourcetype=my_sourcetype source_user=*
    | fields source_user ]

OR

index=email1 sourcetype=my_sourcetype
| join src_user, recipient
    [ search index=email2 *filters* ]

Everything looked OK in the control sample (I found events in a 10-minute window, e.g. 06:00-06:10) that at first glance matched, but when I extended the search time, e.g. to 24h, it did not show me any events, even those that matched in the short time window (even though they were within those 24 hours). Thank you for any ideas or solutions for this case.
The Splunk OVA for VMware Metrics documentation at https://docs.splunk.com/Documentation/OVAVMWmetrics/4.3.0/Config/About describes its operating system and update policy:

OS: Red Hat Enterprise Linux release 9.2 (Plow)
OS Update Policy: "You're responsible for the patches introduced in the operating system installed on the OVA. Make sure to regularly update the operating system to avoid vulnerabilities. There is no backward compatibility for the OVA."
Splunk version: 9.1.0.2

Unfortunately, the docs don't describe a process for updating the Splunk version. You could likely update the Splunk installation on it in a technical sense, but it is not officially supported.
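If you do try it anyway, it would presumably follow the standard manual tarball upgrade for Splunk on Linux; the version placeholder below is illustrative, and again, this is unsupported on the OVA:

# back up the configuration first, since the migration is one-way
tar czf /tmp/splunk-etc-backup.tgz /opt/splunk/etc
/opt/splunk/bin/splunk stop
tar xzf splunk-<version>-Linux-x86_64.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license --answer-yes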
In looking for an audit event, we saw this behavior too... anyone else? Did you get a response outside of your query?
It could be the first; we do have other defined EXTRACTs and other modifications to data pushed to the indexers, and they work properly. But for some reason this portion of the IIS logs just doesn't work properly. I would have to look into the higher priority; however, other IIS sourcetype logs aren't turning out this way. I do know that the props.conf is in the correct spot. When we stood up Splunk initially, there were custom-written apps rather than the Splunk-supported TA for IIS. I may go that route if I can't get this resolved via our custom app.
PaulPanther's link https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues is where you want to go. Under the "Keep specific events and discard the rest" section, you can find stanzas for props.conf and transforms.conf files that you can place in an app on your indexing machines. Setting the regex of the setparsing stanza to "some message" would keep only the events containing "some message" and discard the rest.

# In props.conf
[source::/your/log/file/path]
TRANSFORMS-set = setnull,setparsing

# In transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = some message
DEST_KEY = queue
FORMAT = indexQueue

(It is assumed that you already have a working inputs.conf file to get the logs into your indexing machines. You can also set the stanza name in the props.conf file to use your log sourcetype.)
Ok. This looks better. So the usual suspects are naturally:

1. Mismatch between the sourcetype naming in inputs and props (and possibly some overriding settings defined for source or host)
2. Something overriding these parameters, defined elsewhere with higher priority (check with btool; see the example below)
3. Wrongly placed props.conf (on an indexer when you have a HF in your way)

Of course there is also the question of "why aren't you simply using the Splunk-supported TA for IIS?".
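For reference, a btool check might look like this (the sourcetype name here is a placeholder for whatever your IIS inputs actually use):

$SPLUNK_HOME/bin/splunk btool props list my_iis_sourcetype --debug

The --debug flag prints which file each effective setting comes from, which makes precedence problems easy to spot.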
Your Splunk update has also updated the Python urllib3 library to version 1.26.13, but the Splunk_TA_paloalto app expects a version of urllib3 between 1.21.1 and 1.25 (inclusive). Therefore the Palo Alto app is complaining. The ideal solution to this problem is to ask the Palo Alto app developers to make the app support urllib3 version 1.26.13. If you would rather not wait for the developers to update the app, you could tell the app to just accept version 1.26.13 and hope for the best. It might work without a hitch, or it might produce other errors.

To force the app to accept urllib3 1.26.13, edit the following file:

/opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py

In the check_compatibility function, there will be a section for checking urllib3. Change the line "assert minor <= 25" to "assert minor <= 26":

# Check urllib3 for compatibility.
major, minor, patch = urllib3_version  # noqa: F811
major, minor, patch = int(major), int(minor), int(patch)
# urllib3 >= 1.21.1, <= 1.25
assert major == 1
assert minor >= 21
assert minor <= 26

Save the file and reload the app (or restart splunkd), and the error should go away.
It is described in the "route and filter data" document you've been pointed to. One important thing that people often misunderstand at first: if you configure multiple transforms in one transform group, all of them are executed in sequence. So you must define a transform redirecting all events to nullQueue (dropping them) first, and only after that a transform sending chosen events to indexQueue.
When you're overwriting the value of the _TCP_ROUTING metadata field, you're effectively telling Splunk to route the events to this destination (output group) only. If you want to route some data to more than one output group, you must include all relevant output groups in _TCP_ROUTING, like:

_TCP_ROUTING = my_primary_indexers, my_secondary_indexers

Read https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Configure_routing

Of course you don't have to put the transforms.conf into etc/system/local (in fact, it'd be best if you didn't).
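As a sketch, assuming both output groups are already defined in outputs.conf (the stanza names and the catch-all regex below are placeholders):

# transforms.conf
[route_to_both]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = my_primary_indexers,my_secondary_indexers

# props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_both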
2024-04-08 02:24:47 10.236.6.10 GET /wps/wcm/webinterface/login/login.jsp "><script>alert("ibm_login_qs_xss.nasl-1712543165")</script> 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 0 4.35.178.138
2024-04-08 02:24:47 10.236.6.10 GET /cgi-bin/login.php - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 0 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 4.35.178.138
2024-04-08 02:24:48 10.236.6.10 GET / - 443 - 10.236.0.223 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 4.35.178.138
So if there is an error seen in the ABC log, then you would like to find the details for that error in the EFG log. You would like to count the number of errors for each correlationId, so that you can later search for that correlationId and list all of the errors that occurred, along with the details message for that correlationId. Is that correct? E.g.:

CorrelationId  Errors  Details
abcd-0001      0
abcd-0002      4       Error msg 1, Error msg 2, Error msg 3, Error msg 4
abcd-0003      1       Error msg 1
abcd-0004      2       Error msg 1, Error msg 2
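If so, a rough sketch of the aggregation might look like this; the index, the level filter, and the details field are all assumptions about your data, and correlationIds with zero errors would additionally need the full ID list from the ABC log (e.g. via append plus fillnull value=0 Errors):

index=efg_log level=ERROR
| stats count as Errors, values(details) as Details by correlationId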