All Posts

The first question is whether you indeed have special characters which are displayed this way, or whether they were rendered before/on ingest and are stored as literal "\xsomething" strings, because that will change the way you must match them.
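For example, a quick way to tell the two cases apart (a sketch; testing against _raw and the exact escape pattern are assumptions) is to check whether the raw event contains a literal backslash-x-hex sequence as plain text:

| eval stored_as_text=if(match(_raw, "\\\\x[0-9A-Fa-f]{2}"), "literal \x escapes stored as text", "possibly real bytes")

If the match succeeds, the events contain the characters "\x" plus two hex digits as ordinary text and you can match them with a normal regex; if not, you are dealing with actual non-printable bytes.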
It's not binary, more like hex-encoded, see below:

\x00}\x00\x00ye\xBBE\x9A9\xEA!\xBE<\x8F$W\xBB\xC9EP\xA3\x8Ff\xECn_\x9D\xEB\xE8\xF8i\xDE\xD7\x00\x00,\x00\x9F\x00k\x00\xA3\x00j\x009\x008\x00\x9D\x00=\x005\x00\xA2\x00@\x002\x00\x9E\x00g\x003\x00\x9C\x00<\x00/\x00\x00\x00

1. Yes, our inputs.conf is attached above. After every change we restart the Splunk services.
2. We're trying to get approval from the ePO admin to run Wireshark on the server; if not, we'll just generate MER logs and send them back to ePO support.
And you have those results in multivalued fields? In separate result rows?
Hi @inayshon, what kind of issue are you experiencing? Anyway, you should be able to find your dashboard in the "Dashboards" section of the app you were in when you created it, obviously only when accessing Splunk with your own account and not a different one. If your app doesn't have a Dashboards section, you can manually write in the URL bar: http://<your_splunk_server>:8000/en-US/app/<your_app>/dashboards Ciao. Giuseppe
Having issues accessing my dashboard that I'm seeing using my Coursera course link...
If you're getting binary data in your events, that means that TLS is not enabled properly on that port. So the way to go about it would be:
1) Configure TLS on that port (which you supposedly did), restart the receiver (did you?), and verify the connectivity with openssl s_client -connect.
2) Test connectivity from ePO and check the logs on both sides for TLS-related errors.
3) If that doesn't give you any clues, capture the traffic with tcpdump and see what parameters both sides demand/offer.
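For example, the openssl check from another machine would look something like this (host and port are placeholders for your receiver and its TLS input):

openssl s_client -connect <your_receiver_host>:<tls_port>

A properly configured TLS port prints the certificate chain and the negotiated protocol and cipher; a plain-TCP port instead hangs or fails immediately.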
Hi @ezamit , good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @maverick27 , you could use eval with coalesce, something like this:

index=index1 OR index=index2
| eval new_field=coalesce(field1, field2)
| table new_field

Ciao. Giuseppe
Thank you for the link to the diagram
I can find events without timestamps by using regex. It is not about the _time field but about the existence of "time" in the event. Apparently, my first explanation was not good enough.
Hi Team, can you please let me know what the limit is for excluding Pages, iFrames, Virtual Pages, and Ajax calls?
Hi @ITWhisperer , I have two datasets from two search queries. I need to fetch the common as well as distinct values from both the datasets in the final result. Something like this:

Field1  Field2  Result
1       2       1
3       4       2
5       6       3
7       8       4
9       10      5
10              6
                7
                8
                9
                10

Can you please help with the query?
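One possible way to build that union (a sketch; your_first_search and your_second_search are placeholders for the two queries, and the field names follow the table above) is to append the second result set under the first, stack both fields into one column, and deduplicate:

your_first_search
| table Field1
| rename Field1 AS Result
| append [ search your_second_search | table Field2 | rename Field2 AS Result ]
| dedup Result
| sort Result
| table Result

append stacks the second dataset under the first, rename lines both value sets up in one column, and dedup keeps each common value only once, matching the Result column shown.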
Hi, we're trying to connect ePO via syslog to Splunk. We've followed the steps provided in the ePO add-on documentation and were able to capture logs from ePO. However, the logs are encrypted. When we raised this concern with our ePO support, they suggested two things:
1. Enable the TLS/cipher suites supported by ePO on the Splunk side.
2. Add Splunk as a registered server and make sure the test syslog is successful.

Following the Splunk documentation, we're always getting a failed test syslog. Scouring around different docs and community posts on other SIEM brands, most seem to have had success (in connecting to ePO) once they verified that the cipher suites supported by ePO exist and are enforced on their collector.

Going from this, is there a way to check/verify which cipher suites are used by Splunk? I've seen the document regarding Splunk TLS, and it seems that the cipher suites supported by ePO are included in the default, however is there a way to verify this?

Our setup is as follows:
- Configured HF on a Win server
- Configured inputs.conf as below:
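One way to verify which TLS settings are actually in effect on the HF (a sketch; $SPLUNK_HOME stands for your Splunk installation directory, and on Windows the binary is %SPLUNK_HOME%\bin\splunk.exe) is btool, which prints the merged configuration, including any cipherSuite and sslVersions values:

$SPLUNK_HOME/bin/splunk btool inputs list --debug
$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug

If no cipherSuite is set explicitly, Splunk uses its built-in default list, which you can then compare against the suites ePO requires.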
Can we monitor the WhatsApp chat bot used for mobile banking to know its performance, using mobile real user or synthetic monitoring?
I have a multivalue field and am hoping I can get help to replace all the non-alphanumeric characters within a specific place within each value of the mvfield. I am taking this multivalue field and creating a new field, but my regex is simply ignoring entries whenever there is a special character. I have to ignore these characters, so I'm trying to find a way to remove them before they reach the eval statement that creates the new field. I know the problem is the capture group around the "name" value, as it only allows \w and \s: name\x22\x3a(?:\s+)?\x22([\w\s]+)\x22. But I'm not sure how to fix it. I've tried extracting the name field first and using sed to remove the characters, but then I don't know how to "re-inject" it back into the mv-field, or how to build my new field while referencing the now-clean name field. Any ideas???

Sample Data

{"bundle": "com.servicenow.blackberry.ful", "name": "ServiceNow Agent\u00ae - BlackBerry", "name_version": "ServiceNow Agent\u00ae - BlackBerry-17.2.0", "sw_uid": "faa5c810a2bd2d5da418d72hd", "version": "17.2.0", "version_raw": "0000000170000000200000000"}
{"bundle": "com.penlink.pen", "name": "PenPoint", "name_version": "PenPoint-1.0.1", "sw_uid": "cba7d3601855e050d8new0f34", "version": "1.0.1", "version_raw": "0000000010000000000000001"}

SPL to create new field

| eval new = if(sourcetype=="custom:data", mvmap(old_field,replace(old_field,"\x7b.*?\x22bundle\x22\x3a\s+\x22((?:net|jp|uk|fr|se|org|com|gov)\x2e(\w+)\x2e.*?)\x22.*?name\x22\x3a(?:\s+)?\x22([\w\s]+)\x22.*?\x22sw_uid\x22\x3a(?:\s+)?\x22(?:([a-fA-F0-9]+)|[\w_:]+)\x22.*?\x22version\x22\x3a(?:\s+)?\x22(.*?)\x22.*$","cpe:2.3:a:\2:\3:\5:*:*:*:*:*:*:* - \1 - \4")),new)

This creates one good and one bad entry:

{"bundle": "com.servicenow.blackberry.ful", "name": "ServiceNow Agent\u00ae - BlackBerry", "name_version": "ServiceNow Agent\u00ae - BlackBerry-17.2.0", "sw_uid": "faa5c810a2bd2d5da418d72hd", "version": "17.2.0", "version_raw": "0000000170000000200000000"}
cpe:2.3:a:penlink:PenPoint:1.0.1:*:*:*:*:*:*:* - com.penlink.penpoint - cba7d3601855e050d8new0f34
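One possible fix (a sketch, assuming the name value never contains an escaped quote): rather than stripping the characters first, widen the capture group from [\w\s]+ to [^\x22]+ so it matches any character up to the closing quote, including \u00ae and similar escapes:

| eval new = if(sourcetype=="custom:data", mvmap(old_field,replace(old_field,"\x7b.*?\x22bundle\x22\x3a\s+\x22((?:net|jp|uk|fr|se|org|com|gov)\x2e(\w+)\x2e.*?)\x22.*?name\x22\x3a(?:\s+)?\x22([^\x22]+)\x22.*?\x22sw_uid\x22\x3a(?:\s+)?\x22(?:([a-fA-F0-9]+)|[\w_:]+)\x22.*?\x22version\x22\x3a(?:\s+)?\x22(.*?)\x22.*$","cpe:2.3:a:\2:\3:\5:*:*:*:*:*:*:* - \1 - \4")),new)

If the special characters must also be removed from the resulting CPE string rather than just matched, you could wrap the result in a second replace() that deletes non-alphanumerics.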
We have a file that is rotated at midnight every night. The file is renamed and zipped up. Sometimes, after the log rotation, Splunk does not ingest the new file. There are no errors in the splunkd log relating to CRC or anything along those lines. A restart of Splunk resolves the issue; however, we would like to find a more permanent solution. We are on UF version 9.0.4. Appreciate any suggestions you may have.
Hello, I need a proxy connection when I use TA-tenable-easm on Splunk. Is there a way, or a guide, to set up a proxy for TA-tenable-easm?
Hello, how do I display the date range from the time range dropdown selector in Dashboard Studio? Thank you for your help. I am currently using the "Table" visualization type and created a data configuration with the following search. info_min_time and info_max_time gave me duplicate data for each row, and I had to use dedup. Is this a proper way to do it? Is there a way to use the time tokens ($timetoken.earliest$ or $timetoken.latest$) from the time range dropdown selector in the search from the data configuration (not in XML)?

index=test
| addinfo
| eval info_min_time="From: ". strftime(info_min_time,"%b %d %Y %H:%M:%S")
| eval info_max_time="To: ". strftime(info_max_time,"%b %d %Y %H:%M:%S")
| dedup info_min_time, info_max_time
| table info_min_time, info_max_time
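One way to avoid the dedup (a sketch; it assumes the search's time range in the data configuration is bound to your time range dropdown) is to start from makeresults, which returns a single row, and let addinfo pick up the search's time range:

| makeresults
| addinfo
| eval From=strftime(info_min_time,"%b %d %Y %H:%M:%S")
| eval To=strftime(info_max_time,"%b %d %Y %H:%M:%S")
| table From, To

Because makeresults emits exactly one row, there is nothing to deduplicate, and info_min_time/info_max_time still reflect whatever the dropdown sets as earliest/latest.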
I am new to Splunk and I have inherited a system that forwards logs in CEF CSV format. These logs are then tar'd up and sent to the distant end (which does happen successfully). The issue I have is that when the Splunk server picks up the CEF CSV, it has epoch time as the first entry of every log in the CEF CSV file. This makes the next hop/stop aggregator I send to unhappy.

original host (forwarder) -> splunk host -> splunk host -> master aggregator (arcsight type server)

example: 1706735561, "blah blah blah"

The file cef.csv says it's doing "_time","_raw". When I look at what I think is the setup for time (etc/datetime.xml), _time does not have anything about epoch or %s in there. How do I configure the CEF CSV to omit the epoch time? As I mentioned earlier, I am totally new to Splunk. Any help would be fantastic.
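Purely as a sketch (it assumes cef.csv is produced by a scheduled search or an export, which the post doesn't confirm): if the "_time","_raw" header comes from an outputcsv-style export, keeping only _raw before the output step would omit the epoch column:

index=your_index sourcetype=your_cef_sourcetype
| table _raw
| outputcsv cef.csv

Here your_index, your_cef_sourcetype, and the outputcsv destination are placeholders; if the file is generated some other way, this won't apply.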
Hi, we came across a strange issue: CSV logs are not getting ingested when the log has only one line (in addition to the header). The same logs with two or more lines are ingested successfully. Here are the inputs.conf and props.conf we are using.

inputs.conf

[monitor:///apps/ab_cd/resources/abcd/reports_rr/reports/abc/.../*_splunk.csv]
sourcetype = source_type_name
index = index_name
ignoreOlderThan = 2h
crcSalt = <SOURCE>

props.conf

[source_type_name]
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
PREAMBLE_REGEX = ^Region
TIME_PREFIX = ^(?:[^,\n]*,){1}
TIME_FORMAT = %Y-%m-%d
MAX_TIMESTAMP_LOOKAHEAD = 10
MAX_DAYS_HENCE = 5

Appreciate all the ideas.