All Topics

Hi Splunkers, we have multiple CSV files whose data we need to send from a Universal Forwarder to Splunk. We have tried several approaches without getting proper results. Please provide proper stanzas for the data below. I have attached sample CSV data:

"User","Device type","Device model","UDID","Mac Address","Company Name","OS Version","Agent Version","Latest Config","Policy Name","Last Seen","Registration State","Owner","Hostname","Manufacturer","Config Download Count","Registration TimeStamp","Config Download TimeStamp","Keep Alive Timestamp","Device Hardware Fingerprint","Tunnel Version"
"00000000@example.com","IO","Ap dummydevice7,2","958-5F-4-9D-8D999F","00:00:00:00","cbt","Version 11.2.6 (Build 15D100)","1.1.0 (156893) ","No","Unknown","2018-05-14 11:12:19 GMT","Outdated","","","App","1","2018-05-14 11:12:19 GMT","2018-05-14 11:12:19 GMT","2018-05-14 11:12:19 GMT","95-5F-4D-9D-BB",
"00000000@example.com","OS","A dummydevice10,3","4D3E8-49-4CB5-93-3FF467910CEC","00:00:00:00:00:00","cbt","Version 12.0 (Build 16A5365b)","1.2 (165787) ","No","Unknown","2018-08-30 15:41:57 GMT","Outdated","","","A","1","2018-08-30 15:25:34 GMT",,"2018-08-30 15:41:57 GMT","4D-EF-4C-93-3F",
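A common pattern for header-bearing CSVs like this is INDEXED_EXTRACTIONS applied on the forwarder itself (for structured data, this parsing happens at the UF, not the indexer). The stanzas below are only a sketch under assumptions: the monitor path, index, and sourcetype names are placeholders, and the timestamp field choice is a guess from the sample data — test before deploying.

```ini
# inputs.conf on the Universal Forwarder (path is a placeholder)
[monitor:///path/to/csv/files/*.csv]
disabled = false
index = device_inventory
sourcetype = device_csv

# props.conf — must also live on the UF, because INDEXED_EXTRACTIONS
# for structured data is applied at the forwarder
[device_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
# Assumption: use the "Last Seen" column as the event time; drop these
# two lines to fall back to index time if the field name does not resolve
TIMESTAMP_FIELDS = Last Seen
TIME_FORMAT = %Y-%m-%d %H:%M:%S %Z
```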
Hello team, I would like to merge several events into one. Currently my events look like this:

1st part:
{"log":"feign.FeignException$NotFound: status 404 reading xxxxx#getContractDataByContractUuidDynamicV1(String,String)\n","stream":"stdout","time":"2020-04-28T06:09:41.253478466Z","kubernetes":{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}} source http:xxx-xxx-xxx

2nd part:
{"log":"\tat feign.FeignException.clientErrorStatus(FeignException.java:165)\n","stream":"stdout","time":"2020-04-28T06:09:41.253535467Z","xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}}

The 3rd through nth parts follow the same pattern. So in props.conf I created this stanza:

[source::http:xxx-xxx-xxx]
SHOULD_LINEMERGE = true
MUST_NOT_BREAK_BEFORE =
MUST_NOT_BREAK_AFTER = feign.FeignException\$NotFound
MUST_BREAK_AFTER = INFO

But I still do not see the events being merged. Any ideas where to check in order to debug? Thank you
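For docker-style JSON where stack-trace continuation lines carry an escaped tab in the "log" payload, one alternative sometimes used is to break only before lines that do not start with that continuation marker. This is an untested sketch: the source stanza name is taken from the post, and the regex assumes every event begins with {"log":" and that continuation lines begin with the two literal characters backslash-t.

```ini
[source::http:xxx-xxx-xxx]
SHOULD_LINEMERGE = true
# Break only before JSON lines whose "log" payload does NOT start with
# a literal \t — i.e. treat "\tat ..." lines as continuations
BREAK_ONLY_BEFORE = ^\{"log":"(?!\\t)
```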
Hi Experts, I am trying to ingest data from a mount point. In that path, files are log-rotated and moved to .gz format on a daily basis, but Splunk ingests only the files that have already been converted to .gz, so I don't see logs until they are about 24 hours old. Please suggest how I can get the latest logs into Splunk. Sample conf files:

inputs.conf:
[monitor:///data/production/logs/JobBatch/documents]
disabled = false
index = logs_jobbatch
sourcetype = JobBatch
whitelist = (zipDocument_[\d]+.log.gz|zipDocument_[\d]+.log)

props.conf:
[JobBatch]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = false
category = Custom
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}\s{1}\d{2}:\d{2}:\d{2}.\d{3}\s{1}
HEADER_FIELD_LINE_NUMBER = 9
TRUNCATE = 750000
MAX_EVENTS = 1000
BREAK_ONLY_BEFORE_DATE =
SHOULD_LINEMERGE = false
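One thing worth ruling out (a sketch, not a verified fix): the whitelist's unescaped dots and lack of an anchor mean ".log" can match more than intended, and it is easier to see whether the live .log file is even being picked up after tightening the pattern. The stanza below assumes the live files really are named zipDocument_<digits>.log.

```ini
[monitor:///data/production/logs/JobBatch/documents]
disabled = false
index = logs_jobbatch
sourcetype = JobBatch
# Escape the literal dots and anchor the end, so both the live .log
# and the rotated .log.gz files match, and nothing else does
whitelist = zipDocument_\d+\.log(\.gz)?$
```

On the forwarder, "splunk list monitor" (or btool on inputs.conf) should then show whether the live .log file is in the monitored set before it is rotated.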
Hello All, I am having trouble forwarding Cisco ESA (authentication) logs from a HF to the indexers. Here are the steps taken to configure it:
- Installed the Splunk Add-on for Cisco ESA on the HF & SH.
- Copied "Authentication logs" from the ESA to the HF via SCP.
- Created the following inputs.conf file under the Splunk_TA_cisco-esa folder on the HF:

[monitor:///opt/splunk/etc/apps/Splunk_TA_cisco-esa/data/authentication/authentication.@20200325T075236.s]
disabled = false
index = ciscoesa
sourcetype = cisco:esa:authentication

Not sure if I missed anything on the HF, as Windows events are being forwarded from the same HF to the indexer without any issue. Can anyone please suggest what the issue could be? Thanks,
Hello, I have this subsearch command:

[search source="local/data/user/logs/access*" status=5* | table request_id]

It gets the request_ids from the table and searches for them globally. I have a service file in which the request_id field is not extracted by default, so those events get excluded from the search results. How can I make sure that the subsearch includes the results from the service file? Here is my command to extract the request_id field from the service file:

source="/home/user/logs/service*" | rex "Request\sID:\s(?<request_id>\w+)"

Thanks
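One pattern that may help (a sketch built from the searches in the post, not tested): apply the rex in the outer search over both sources first, so request_id exists on the service events before the subsearch's field=value filters are applied.

```
source="/home/user/logs/service*" OR source="local/data/user/logs/access*"
| rex "Request\sID:\s(?<request_id>\w+)"
| search [search source="local/data/user/logs/access*" status=5* | table request_id]
```

The subsearch still returns request_id values from the access logs; the difference is that the outer events now carry an extracted request_id to match against.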
We recently upgraded Splunk Enterprise to 8.0.2.1 and all the apps in our environment to their latest versions, one of them being the Splunk Enterprise Security app, to 6.1.0. We started receiving error messages such as:

Health Check: msg="A script exited abnormally with exit status: 1" input="opt/splunk/etc/apps/SA-Utils/bin/configuration_check.py" stanza="configuration_check://confcheck_escorrelationmigration"

Similar errors are popping up for all the configuration_check:// input stanzas in SplunkEnterpriseSecuritySuite.
Hi, my scenario is that I have a set of commands, with total hits and total failures for each command over the last 30 minutes. Say Command A got 100 hits in the last 30 minutes and 30 of them failed. I want to compare that with the same command's total hits and failures for the previous 30 minutes; if the failure rate is the same, check the 30 minutes before that as well, and if I still see the same failure percentage, trigger an alert. How can I do this in Splunk?
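A sketch of one way to compare three consecutive 30-minute windows (heavily hedged: the base search, the command and result field names, and the "identical failure rate" criterion are all assumptions standing in for the real data):

```
index=myindex earliest=-90m@m latest=@m
| bin _time span=30m
| stats count AS hits, count(eval(result="failure")) AS failures BY command, _time
| eval fail_pct=round(100*failures/hits, 1)
| eventstats count AS windows, dc(fail_pct) AS distinct_rates BY command
| where windows=3 AND distinct_rates=1 AND fail_pct>0
```

The idea: one row per command per 30-minute bucket, then alert only when all three buckets exist and share the same nonzero failure percentage. A tolerance band (e.g. abs(max-min) of fail_pct) would be more robust than exact equality.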
Hello, I have some fields that contain multiple values, and I need to split them out into their own rows. The fields in question start with appliedConditionalAccessPolicies{}.* — I've tried a few things but no luck. I was looking at the other Azure AD sign-in apps, and it looks like I'm going to have to do this a lot.

index=azuread sourcetype=ms:aad:signin appDisplayName="Microsoft Teams"
| table alternateSignInName appliedConditionalAccessPolicies{}.conditionsNotSatisfied appliedConditionalAccessPolicies{}.conditionsSatisfied appliedConditionalAccessPolicies{}.displayName appliedConditionalAccessPolicies{}.enforcedGrantControls{} appliedConditionalAccessPolicies{}.enforcedSessionControls{} appliedConditionalAccessPolicies{}.id appliedConditionalAccessPolicies{}.result authenticationDetails{}.authenticationMethod authenticationDetails{}.authenticationStepDateTime authenticationDetails{}.authenticationStepRequirement authenticationDetails{}.authenticationStepResultDetail authenticationDetails{}.succeeded authenticationProcessingDetails{}.key authenticationProcessingDetails{}.value authenticationRequirement conditionalAccessStatus createdDateTime deviceDetail.browser deviceDetail.deviceId deviceDetail.displayName deviceDetail.isCompliant deviceDetail.isManaged deviceDetail.operatingSystem deviceDetail.trustType eventtype ipAddress location.city location.countryOrRegion location.geoCoordinates.latitude location.geoCoordinates.longitude location.state mfaDetail.authMethod resourceDisplayName riskState status.additionalDetails status.errorCode status.failureReason tenant userAgent userDisplayName userId userPrincipalName id correlationId resourceId originalRequestId
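A common sketch for this kind of parallel multivalue field is mvzip + mvexpand + split: zip the values pairwise so each row keeps its related values together, expand, then split back out. Untested here, and shown with just two of the fields for brevity (single quotes are needed around the {}-style field names).

```
index=azuread sourcetype=ms:aad:signin appDisplayName="Microsoft Teams"
| eval zipped=mvzip('appliedConditionalAccessPolicies{}.displayName', 'appliedConditionalAccessPolicies{}.result', "|")
| mvexpand zipped
| eval policy_name=mvindex(split(zipped, "|"), 0), policy_result=mvindex(split(zipped, "|"), 1)
| table alternateSignInName policy_name policy_result
```

Additional fields can be folded in with nested mvzip calls, though for many fields a spath over the raw JSON array is often cleaner.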
Hi Guys, I'm trying to convert event data into metrics for CPU, disk, and memory monitoring for Azure PaaS, using the command below:

index=azure_data metric_name=cpu_percent
| eval metric_name=case(metric_name="cpu_percent", "%_Processor_Time")
| eval _value=maximum
| fields host,metric_name,namespace,_time,_value
| eval prefix="Processor."
| mcollect index=azure_metrics_summary prefix_field=prefix

Here I'm trying to use mcollect to send CPU data to the summary index "azure_metrics_summary". The query runs with no errors but does not show the _value field in the output, and when I run the command below to search the metrics data, I see no output either:

| mstats avg(_value) count where metric_name="Processor.%_Processor_Time" index=azure_metrics_summary by host
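One thing worth checking (a sketch under assumptions, not a verified fix): mcollect needs a numeric _value and a metric_name on every row at collection time, and a non-numeric "maximum" field would silently produce empty metric points. Forcing the value numeric, dropping rows without one, and building the full metric name directly instead of via prefix_field narrows down where it breaks:

```
index=azure_data metric_name=cpu_percent
| eval metric_name="Processor.%_Processor_Time"
| eval _value=tonumber(maximum)
| where isnotnull(_value)
| fields host metric_name _time _value
| mcollect index=azure_metrics_summary
```

Also confirm azure_metrics_summary is defined with datatype = metric in indexes.conf; mstats returns nothing against an event index.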
I am trying to get counts of events that match only a particular field value pattern in a multivalued field. The multivalued field contains values like:

name=abc;name=12345;name=246
name=12344
name=246;name=abc
name=12378

I need counts of events whose field values all match name=123*, ignoring the ones that are a combination with other values. I tried the search below, but it includes all events containing name=123*:

| makemv delim=";" multivalued-field
| rex field=multivalued-field "name=(?P<whatineed>[^,]+),"
| search whatineed="123*"
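A sketch that may express "all values match" rather than "any value matches" (untested; it assumes the split values land in a multivalue field named names, standing in for the real field name): compare the count of matching values with the total count.

```
| makemv delim=";" names
| eval total=mvcount(names)
| eval matching=mvcount(mvfilter(match(names, "^name=123")))
| where total=matching
```

Events like name=abc;name=12345;name=246 then fail the total=matching test, while name=12378 passes.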
Hello everyone! I need to audit when someone edits the "test" file, reached via either of the following paths. For example:

cd /opt/tomcat/webapps/file1/file2/file/ then "nano test"

versus

nano /opt/tomcat/webapps/file1/file2/file/test

In the first case I see the event on the test file, but in the second one I do not see any event registered. What should I modify in the audit/syslog configuration to register both events in Splunk? Thanks in advance!
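For reference, a sketch of audit rules covering both access patterns (assumptions: auditd is in use and the path from the post is absolute). A directory watch is often the safer choice, because editors like nano may rename or replace the file rather than write it in place, and the watch then catches the edit regardless of how the path was given:

```
# /etc/audit/rules.d/tomcat.rules (sketch; reload with "augenrules --load")
-w /opt/tomcat/webapps/file1/file2/file/test -p wa -k tomcat_test_edit
# or watch the whole directory instead:
-w /opt/tomcat/webapps/file1/file2/file/ -p wa -k tomcat_dir_edit
```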
I have a table with a "delta" column, which contains the result of subtracting column-A's value from column-B's. I'd like to color the "delta" column as follows: worse than -10%, red; -10% to -5%, orange; -4% to 0%, yellow; 0% to +5%, clear (i.e. no color); above +5%, green. I see I can set a min, mid, and max percentage, but I'd like to specify these exact ranges and colors, if possible.
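In Simple XML, custom ranges like these can usually be expressed with an expression-based color palette on the table column. The sketch below is untested against this dashboard, and the hex colors are arbitrary stand-ins for red/orange/yellow/white/green:

```xml
<format type="color" field="delta">
  <colorPalette type="expression">case(value &lt; -10, "#DC4E41", value &lt; -4, "#F1813F", value &lt; 0, "#F8BE34", value &lt;= 5, "#FFFFFF", true(), "#53A051")</colorPalette>
</format>
```

The case() branches are evaluated in order, so each line only needs the upper bound of its range.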
hi there, I have a timechart panel that has 3 nodes in the chart/legend. I want to add a tick box (or equivalent) that allows the user to switch between:
1/ the timechart view of the 3 nodes in one chart, and
2/ the trellis view, which shows 3 separate timecharts (that is, one timechart per node).
What is the code to do this using Simple XML? The pic shows the timechart on top and the trellis view on the bottom. I want a tick box to switch between the two, so they appear in only one panel. Code for the above two panels in the pic:

<row>
  <panel>
    <title>test trelis view</title>
    <chart>
      <search>
        <query>index=core host="snzclakl598" elementType=MSCServer measObjLdn=* measInfoId=83888089 duration=PT900S | timechart span=15m avg(c84163062) as "Outgoing Calls-Seizure Traffic" by userLabel</query>
        <earliest>@w0</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.chart">line</option>
      <option name="charting.drilldown">none</option>
    </chart>
  </panel>
</row>
<row>
  <panel>
    <title>test trelis view - want a tick box that just allows this to be turned into a trellis view</title>
    <chart>
      <search>
        <query>index=core host="snzclakl598" elementType=MSCServer measObjLdn=* measInfoId=83888089 duration=PT900S | timechart span=15m avg(c84163062) as "Outgoing Calls-Seizure Traffic" by userLabel</query>
        <earliest>@w0</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisTitleX.visibility">collapsed</option>
      <option name="charting.axisTitleY.visibility">collapsed</option>
      <option name="charting.axisTitleY2.visibility">collapsed</option>
      <option name="charting.chart">line</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">none</option>
      <option name="trellis.enabled">1</option>
    </chart>
  </panel>
</row>
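A sketch of one way this is often done (untested against this dashboard): drive trellis.enabled from a token set by an input, so a single panel switches between the combined chart and the trellis view. Token handling for an unticked checkbox is fiddly in Simple XML, so a two-choice radio is used here instead:

```xml
<input type="radio" token="trellis_tok">
  <label>View</label>
  <choice value="0">Combined</choice>
  <choice value="1">Trellis</choice>
  <default>0</default>
</input>
<!-- then, inside the single chart element -->
<option name="trellis.enabled">$trellis_tok$</option>
```

With trellis.enabled set to 0 the chart renders as one combined timechart; with 1 it splits into one chart per split-by value.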
Hello, I have been working to enable SSL between a UF and an indexer and am not sure I follow the usage of the requireClientCert option. It seems to me the purpose of this option is to disable the two-way handshake between the forwarder and the indexer, but the behavior I am seeing runs counter to that. If I do not point the forwarder's outputs.conf to a clientCert and sslPassword, I receive this error in the indexer log:

04-27-2020 19:48:52.747 +0000 ERROR TcpInputProc - Error encountered for connection from src=my_fwdr_ip:38694. error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number

That's a pretty generic error, but in most cases it means there was a handshake issue between a client and a server. Shouldn't requireClientCert = false remove the need for the forwarder to present a cert back to the indexer? Is this a bug? Below are my .confs.

inputs.conf on the indexer:
[default]
host = myhost.mycodomain

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/myco_certs/mychain.pem
sslPassword = <redacted>
requireClientCert = false

outputs.conf on the UF:
[tcpout]
disabled = false
defaultGroup = splkgroup1

[tcpout:splkgroup1]
server = 123.456.123.456:9997
disabled = 0
sslCommonNameToCheck = myco.com
sslVerifyServerCert = true
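A hedged observation: "wrong version number" on the server side typically means the client sent plaintext to an SSL port, i.e. the forwarder never attempted TLS at all, which is a different failure than a rejected client certificate. On the forwarder output side, TLS generally has to be switched on explicitly via the ssl* settings in the tcpout group, so a sketch worth testing is adding the CA path without any client cert (the path below is a placeholder):

```ini
[tcpout:splkgroup1]
server = 123.456.123.456:9997
disabled = 0
sslRootCAPath = /opt/splunkforwarder/etc/auth/myco_certs/cacert.pem
sslVerifyServerCert = true
sslCommonNameToCheck = myco.com
```

If that connects cleanly, requireClientCert = false is behaving as documented (no client cert demanded), and the earlier error was the forwarder simply speaking plaintext.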
Hello Members, I have seen many, many posts on Splunk migration and I am confused. I hope I can get some direction on how to accomplish this correctly.

Current Splunk install: Windows 2012 running Splunk Enterprise 8.0.1
New Splunk install: RHEL 7 x86_64 Linux running Splunk Enterprise 8.0.3

I am going from Windows to Linux. I have seen posts suggesting "copy everything from $SPLUNK_HOME to the new instance". This does not quite make sense to me: the conf files on the Windows side all contain non-Linux paths using "\". I have looked at my indexes.conf and other conf files, and the paths are expressed Windows-style. I have seen another post where you stop the old instance and copy the buckets to the new instance like this:
1. Roll any hot buckets on the source host from hot to warm.
2. Review indexes.conf on the old host to get a list of the indexes on that host.
3. On the target host, create indexes that are identical to the ones on the source system.
4. Copy the index buckets from the source host to the target host.
5. Restart Splunk Enterprise.

I assume the 5 steps above apply to all indexes, both custom (in the local directory) and default (in the default directory), and that all Windows paths would have to be changed to Linux-style in indexes.conf and inputs.conf?

I did a test with a simple index created just for testing. I created an indexes.conf file on the new server in /etc/apps/search/local and revised the paths to Linux style. Then I copied the \var\lib\splunk\test-index directory to the Linux machine, using forward-slash paths. I then performed a search on this new index on the new server and it works fine.

My basic question: if I copy everything under $SPLUNK_HOME from Windows, do I have to change the paths? Or, if I follow the 5-step list above, does it just mean copying the db data from \var\lib\splunk\ to /var/lib/splunk/ on the new server and editing indexes.conf and inputs.conf accordingly? What about the mongo dir?

Thanks so much, Eholz1 - Eric
Here's the Highcharts' stacked area chart demo: https://www.highcharts.com/demo/area-stacked If you click any of the data series in the legend at the bottom, it toggles that data series on and off in the chart itself. From what I understand, Highcharts is what Splunk uses for charting, so it should be technically possible. Is this functionality supported in Splunk? This seems like something basic that every major charting library supports. But I looked everywhere in Splunk docs and haven't found anything.
Hi everyone, I have an issue which I can't resolve. I have Googled this a lot but can't work out how to achieve my goal. I am sending Suricata alerts to Splunk in JSON format. There are tons of alerts, most of them of course false positives, so I want to build a filter. I know about Suricata's disable.conf and threshold.conf, but they can't provide the level of accuracy I need: disable.conf disables a rule entirely, which is appropriate for only a few rules, and threshold.conf can suppress alerts based on src_ip or dst_ip, but not both, and without reference to ports. So Suricata doesn't provide any mechanism to filter alerts effectively; its methods allow only very crude filtering.

What I want is to filter events based on src_ip, src_port, dest_ip, and dest_port, with someplace to store and update the conditions. A CSV file and a lookup seem suitable for this purpose. For example, I have a regular alert with these values:

signature_id: 1111
src_ip: 192.168.1.1
src_port: 12345
dst_ip: 192.168.1.2
dst_port: 445

To exclude this particular event from the search, the CSV could be as follows:

"sid","s_ip","s_port","d_ip","d_port"
"1111","192.168.1.1","12345","192.168.1.2","445"

Also, in many cases it will be necessary to use a wildcard * for a field, or to have more than one row for a particular signature. For example:

"sid","s_ip","s_port","d_ip","d_port"
"9999","192.168.2.3","*","192.168.2.5","*"
"9999","192.168.2.4","*","192.168.2.5","*"

So I need a Splunk search along the lines of: "Show me all alerts except those that match the exclusion conditions from a lookup (based on the CSV)." I made something close to my goal, but it doesn't work the way I need it to.
index=suricata_alerts NOT (
    [inputlookup suricata_alerts_exclusions | fields sid | rename sid as alert.signature_id]
    AND [inputlookup suricata_alerts_exclusions | fields s_ip | rename s_ip as src_ip]
    AND [inputlookup suricata_alerts_exclusions | fields s_port | rename s_port as src_port]
    AND [inputlookup suricata_alerts_exclusions | fields d_ip | rename d_ip as dest_ip]
    AND [inputlookup suricata_alerts_exclusions | fields d_port | rename d_port as dest_port]
)

If I have a CSV like this:

"sid","s_ip","s_port","d_ip","d_port"
"1111","192.168.1.1","12345","192.168.1.2","445"
"2222","*","*","192.168.1.2","*"

then the following alert will be excluded:

signature_id: 1111
src_ip: ANY
src_port: ANY
dst_ip: 192.168.1.2
dst_port: 445

As you can see, I have strict conditions for signature ID 1111, but because of the wildcards in the third row they end up applying to every row in the CSV. In other words, this search considers any value from any row: it excludes all events that simultaneously match ANY of the values in each COLUMN of the CSV, not limited by rows. Please give me some advice on how to create a search that processes such CSVs correctly, or perhaps there is another way to filter events by "sid","s_ip","s_port","d_ip","d_port" using exact or wildcard values.
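One approach that keeps the match row-wise (a sketch, untested): define the CSV as a lookup with WILDCARD match_type in transforms.conf, then mark matching events with a lookup and drop them. Unlike the subsearch version, a lookup compares all columns of a single row together, and * in a cell matches anything.

```ini
# transforms.conf (sketch)
[suricata_alerts_exclusions]
filename = suricata_alerts_exclusions.csv
match_type = WILDCARD(s_ip), WILDCARD(s_port), WILDCARD(d_ip), WILDCARD(d_port)
max_matches = 1
```

```
index=suricata_alerts
| lookup suricata_alerts_exclusions sid AS alert.signature_id, s_ip AS src_ip, s_port AS src_port, d_ip AS dest_ip, d_port AS dest_port OUTPUT sid AS excluded_sid
| where isnull(excluded_sid)
```

Any event matching every column of some one row gets excluded_sid filled in and is filtered out; events matching columns only across different rows survive.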
In my question I will use a manufacturing-monitoring analogy. Employees (uniquely identified by name) work a certain shift; their names and each completed unit of work are recorded. There are two shifts. While Splunk ingests every manufactured unit, we are interested only in totals for a few particular employees on probation. Each employee has a predefined work schedule, but it's not the same for everyone, and each employee has their own quota, which does not change often. We have two lookup tables: Employees and Holidays. "Holidays" lists all common days off for the majority of employees: one column with data like "2021-01-01". "Employees" provides details about work schedules and quotas:

Name  Shift  Holidays  DaysOfWeek  DaysOfMonth  Quota
John  1      Y         6,7                      100
Jim   2      N         6,7         15,16,17     3000
Nick  2      Y                                  1000000

Our search will be scheduled to run twice a day, at the end of each shift, and needs to output probation employees who were scheduled to work and did meet their quota:
- We need to disregard all employees who are not in the table.
- We need to filter just the current shift.
- For employees that are not required to work on holidays, we need to exclude holidays.
- For employees that have weekly days off, we need to exclude such days.
- For employees that have monthly days off, we need to exclude those as well.
- For the remaining, we need to compare their total to their quota.

While I would not have a problem writing an SQL query for the above, I'm not sure about Splunk. I'm also not certain whether it's possible to match against comma-separated fields without "unpacking" them.
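A rough SPL sketch of the steps above (heavily hedged: the index name, the shift value, and the idea of matching comma-separated schedule columns with a delimiter-wrapped pattern are all assumptions, and the holiday check is omitted for brevity):

```
index=production
| lookup Employees Name OUTPUT Shift DaysOfWeek DaysOfMonth Quota
| where isnotnull(Quota) AND Shift=1
| eval dow=strftime(now(), "%u"), dom=ltrim(strftime(now(), "%d"), "0")
| eval off_today=if(like("," . coalesce(DaysOfWeek, "") . ",", "%," . dow . ",%")
                 OR like("," . coalesce(DaysOfMonth, "") . ",", "%," . dom . ",%"), 1, 0)
| where off_today=0
| stats count AS produced BY Name, Quota
| where produced >= Quota
```

The lookup both enriches and filters (employees missing from the table get null Quota and drop out), and wrapping the comma-separated column in extra commas lets a LIKE test match whole day numbers without unpacking the field. The Holidays table could be handled similarly by comparing today's date against a subsearch over the lookup, gated on Holidays="Y".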
Why won't this phantom.debug() string perform string interpolation?

foo = "bar"
phantom.debug("Testing: {foo}")

It should read as "Testing: bar".
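For context (not specific to Phantom's API beyond phantom.debug taking a string): a plain "…" literal never interpolates; the braces are just text. Only an f-string (Python 3.6+) or str.format() substitutes the variable. Note that older Phantom releases ran playbooks on Python 2.7, where an f-string is a syntax error, so .format() is the portable choice. A minimal sketch:

```python
foo = "bar"

# Plain string literal: braces are ordinary characters, nothing is substituted.
plain = "Testing: {foo}"

# f-string: the local variable is interpolated when the string is created.
interpolated = f"Testing: {foo}"

# str.format(): same result, and also valid on Python 2.7.
formatted = "Testing: {foo}".format(foo=foo)

print(plain)         # Testing: {foo}
print(interpolated)  # Testing: bar
```

Any of the latter two forms can then be passed to phantom.debug().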
Hi All, I have enabled the Modular Input for Elasticsearch(ES) and I am able to get in the data. My sample data is metric data that was collected using Metricbeats in ES. Looking at the data ingested in Splunk, there are a lot of fields that are coming through. Is it possible to selectively index the data into Splunk without changing the configuration or data indices on ES?
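If trimming on the Splunk side (rather than on ES) is acceptable, one hedged sketch is a SEDCMD in props.conf on the instance running the modular input, stripping unwanted JSON keys from _raw before indexing. The sourcetype and key name below are placeholders, and the regex only handles a flat, non-nested object, so test it against real events first; filtering at search time with fields/table avoids touching ingestion at all.

```ini
[es_metricbeat_json]
# Remove a flat "agent":{...} object from the raw JSON before indexing.
SEDCMD-drop_agent = s/"agent":\{[^{}]*\},?//g
```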