
Hi all, we have some data coming from Splunk DB Connect, and one field contains raw data as shown below. How do we convert the JSON payload into a readable format? I have attached a picture of how it should be converted. The field we want to perform JSON operations on is report_json. I tried the search below, but it is not working. Is there anything we need to update on the DB query side to get the expected output?

index="test1"
| search NOT errors="*warning Puppet*" NOT errors="*Permission*" report_json=*
| eval json_string=json(report_json), test=report_json
| table json_string, test, len(test)
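A minimal sketch of one way to expand the payload, assuming report_json holds a valid JSON string: spath can read the JSON from that field and write each key out as its own event field. The index name, error filters, and field name are taken from the question above; nothing else here is confirmed against the actual data.

index="test1" NOT errors="*warning Puppet*" NOT errors="*Permission*" report_json=*
| spath input=report_json
| table report_json *

If the JSON is valid, spath should produce one field per key, which can then be renamed or tabled as needed; in that case nothing has to change on the DB query side just to make the payload readable.
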
Hi all, we are getting the error "unable_to_write_batch java.net.SocketTimeoutException: Read timed out" in Splunk DB Connect.

==
[Scheduled-Job-Executor-0] ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch
java.net.SocketTimeoutException: Read timed out
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:478)
at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:472)
at java.base/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)
at java.base/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1454)
at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1065)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:280)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at com.codahale.metrics.httpclient.InstrumentedHttpRequestExecutor.execute(InstrumentedHtt
==

Please suggest.

Hello, I have a dashboard with these 3 fields (ID, A1_Links, A2_Links). The goal is to count the IDs that contain links, based on the A1_Links and A2_Links columns (how many IDs contain A1 links and how many IDs contain A2 links). How can I do that?

index="" host= sourcetype=csv source=CW27.csv
| dedup ID
| table ID A1_Links A2_Links
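One possible sketch, assuming an ID "contains a link" whenever the corresponding *_Links field is present and non-empty (the index, host, and source values are left exactly as they appear in the question):

index="" host= sourcetype=csv source=CW27.csv
| dedup ID
| stats count(eval(if(A1_Links!="", 1, null()))) as IDs_with_A1_links, count(eval(if(A2_Links!="", 1, null()))) as IDs_with_A2_links

The if(...) returns null() for empty or missing link fields, so only IDs that actually carry a link contribute to each count.
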
Hi, we have our license keys for the next 3 years, but each one only lasts 1 year. Is it possible to apply all 3 license keys now for the next 3 years instead of waiting until close to expiration each year? That way we don't have to worry about them expiring.

Hi all, how do we check the Armis app alert logs in Splunk Cloud? We recently updated the app, so how can we check its logs?

I have 2 queries and I am joining them with "join" on the common field "SessionID". With the query below I only get results when both searches return results. If either the parent search or the subsearch returns nothing, no rows are printed. For example, if there is no LogoutTime available from the subsearch, the results of the parent search are not printed. Is there any way to achieve the desired result?

index = test "testrequest"
| rex "(?:.+email\=)(?<Email>[a-zA-Z0-9_\-\@\.]+)"
| rex "(?:.+trasactionId\=)(?<TransactionID>[a-zA-Z0-9-]+)"
| rex "(?:.+TransactionTime\=)(?<LoginTime>[a-zA-Z0-9\s:]+EDT)"
| rex "(?:.+Status\=)(?<Status>\w+)"
| rex "(?:.+TimeTaken\=)(?<TimeTaken>\d+)"
| rex "(?:.+\+\+)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
| table Email,TransactionID,LoginTime,Status,TimeTaken,SessionID
| join SessionID
    [search index = test "testrespone"
    | rex "(?:.+TransactionTime\=)(?<LogoutTime>[a-zA-Z0-9\s:]+EDT)"
    | rex "(?:.+SessionId\=)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
    | table SessionID,LogoutTime]
| table Email,TransactionID,LoginTime,Status,TimeTaken,SessionID,LogoutTime
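A minimal sketch of one change that may address the missing-LogoutTime case: join is an inner join by default, so making it an outer (left) join keeps every row from the parent search even when the subsearch finds no matching session. The extractions before this point stay exactly as in the original search; only the join line changes:

| table Email,TransactionID,LoginTime,Status,TimeTaken,SessionID
| join type=left SessionID
    [search index = test "testrespone"
    | rex "(?:.+TransactionTime\=)(?<LogoutTime>[a-zA-Z0-9\s:]+EDT)"
    | rex "(?:.+SessionId\=)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
    | table SessionID,LogoutTime]
| table Email,TransactionID,LoginTime,Status,TimeTaken,SessionID,LogoutTime

With type=left, sessions that have no logout event simply show an empty LogoutTime. If the parent search itself can also be empty while the subsearch has data, a join will not help either way; combining both searches in one pipeline and merging with stats values(...) by SessionID is the usual alternative in that situation.
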
Hi, how can I configure a search query to run every day between 5 am and 11:30 am IST? I don't want to save it as a report, but I am using this search in a dashboard and it has to run at a particular time daily. Please help. Thanks in advance.

Hello everyone, I am trying to create a pie chart showing the cache operation split (in percentage) for hit/miss/pass, using the query below for the selected hostname:

index="my_index" openshift_container_name="container"
| eval description=case(handling == "hit","HIT", handling == "miss","MISS", handling == "pass","PASS")
| search hostname="int-ie-yyp.grp"
| addtotals
| eval cache_hit=round(100*HIT/Total,1)
| eval cache_miss=round(100*MISS/Total,1)
| eval cache_pass=round(100*PASS/Total,1)

When I try:

| stats values(cache_hit) as cacheHit values(cache_miss) as cacheMiss values(cache_pass) as cachePass by description

no data is generated. However, when I try a count it works:

| stats count by description

Can someone please help?
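One possible sketch of a simpler route, assuming the pie chart only needs each handling value's share of all events: addtotals works per event, and HIT/MISS/PASS are not fields at that point, so computing the percentages after a stats count may be closer to what is wanted. The index, container, and hostname values are copied from the question:

index="my_index" openshift_container_name="container" hostname="int-ie-yyp.grp"
| eval description=case(handling=="hit","HIT", handling=="miss","MISS", handling=="pass","PASS")
| stats count by description
| eventstats sum(count) as Total
| eval percent=round(100*count/Total,1)
| fields description percent

For a pie chart, | stats count by description on its own is often enough, since the chart already renders each slice as a proportion of the whole.
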
Hi, I need help. I have a Splunk environment running Splunk version 8.2 with the Cisco Firepower eStreamer service (Splunk Add-On) version 5 and the Splunk Add-on for Carbon Black 2.1 (latest). It is not working correctly on Splunk version 8: events have parse errors, and Splunk is unable to identify key fields for those events. I am not sure what causes this or whether a setting is missing. I followed the guideline from https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSpl... and the Splunk doc (invalid link). I have been through all the similar articles in the community, but no luck. Any advice on getting this working is much appreciated. Thank you.

Below is the setup info.

Cisco Firepower eStreamer service (Splunk Add-On) version 5
Issue: Cisco Firepower parsing issue
Device model: Cisco Firepower 1010 Firewall
Collecting method: syslog to Splunk HF > Indexer
Splunk Add-on installed on both HF and SH: https://splunkbase.splunk.com/app/3662 (latest version)
Splunk HF and SH version: 8.2.1
Source type: cisco:firepower:syslog
Source type configuration: tried Auto and regex as well

Splunk Add-on for Carbon Black 2.1 (latest)
Meanwhile, the same happens with the Carbon Black bit9 JSON parsing.
Issue: multiple events are merged by Splunk and therefore fail to parse, although some events come through without any issue. The raw logs show no different patterns, and when I save the raw logs to a text file and upload it manually, it works without any problem.
Collecting method: UF > Indexer
Splunk Add-on installed: https://splunkbase.splunk.com/app/2790 (latest version)
Splunk HF and SH version: 8.2.1
Source type: bit9:carbonblack:json
Source type configuration: tried Auto and regex as well

While doing a Splunk REST API POST call to a KV store collection, one of the string field values is large, so the API call fails with the message below:

String value too long. valueSize=526402, maxValueSize=524288

Is there a way to increase the allowed string field length through configuration? Please let me know your suggestions.

I am trying to use a Universal Forwarder to get a load of Windows event logs that I need to analyse into Splunk. The event logs are from about 7 different systems and are all located in a folder on my local laptop. I have tried adding the folder to the inputs.conf file and setting the sourcetype to WinEventLog, but once the data is in, the individual events are not being extracted. Instead, the entire file is being ingested as one event, and all I can see are the headers for each event log. Is someone able to help me with this please? I should probably state that I am using a Splunk Cloud instance and do not have a deployment server, so I need to go straight from my laptop to the Splunk Cloud instance. Thanks

I asked this on Stack Overflow, but I couldn't get an answer yet, so here is my question. I want to change the bar graph color based on a token value. For example, when I get the token named "fruit", I want to change the entire bar graph color depending on the token, like this:

Apple: red (0xfc4103)
banana: yellow (0xf5db36)
grape: purple (0xa536eb)

(When the token value is "Apple", the entire graph color should become red, 0xfc4103.) So I wrote the following code using "charting.fieldColors":

<panel>
  <chart>
    <search>
      <query>
        .......
        | eval paint_chart = case("$fruit$"=="Apple","0xfc4103", "$fruit$"=="banana","0xf5db36", "$fruit$"=="grape","0xa536eb")
        | chart count by amount_DT
        | rename count AS "Number of fruit"
        ...
      <earliest>-10d@d</earliest>
      <latest>now</latest>
    </search>
    ....
    <option name="charting.fieldColors">{"Number of case" : $results.paint_chart$}</option>
  </chart>
</panel>

But it's not working at all. How could I solve this problem?

Hello, I have a question regarding forwarding and receiving in Splunk. Can I configure the deployment client to send logs to another log collector in case the first one is not responding or not receiving logs? To be more specific, is there any kind of configuration so that the deployment client will automatically switch to another log collector if the first one isn't available? Thank you.

Hi Splunk experts, I have a dashboard with a base search, and I use the base search results in two different panels to collect data into a sourcetype; the two panel queries perform very different kinds of operations. Currently I am running them manually, but I want to run this on a schedule. Is that possible? I thought of a saved search, but I'm not sure whether that's the right solution. Could you please advise on a better approach? Thanks in advance!
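A rough sketch of the saved-search route, assuming each panel's pipeline can stand alone as a search: fold the base search plus the panel-specific logic into one scheduled report and finish it with collect, which writes the results into the destination index. Everything below (index, sourcetype, and the stats step) is a placeholder, not taken from the dashboard in question:

index=my_index sourcetype=my_data earliest=-24h
| stats count by host
| collect index=my_summary_index sourcetype=panel1_results

The second panel would become a second scheduled report with its own collect target. Note that the destination index has to exist already, and collecting with a sourcetype other than the default stash typically counts against license usage.
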
Hi, is it possible to restore archived data for one single host? Say we have index=windows and we want to restore archived data only for one host, i.e. index=windows host=xxx. Is that possible somehow? Thanks in advance.

Hi, can we use wildcards for service monitoring? https://docs.splunk.com/Observability/gdi/monitors-hosts/win-services.html If yes, can you please provide a sample? Thanks.

Hi, we use SAP Cloud ALM for monitoring SAP SaaS-based applications. Is there any way alerts raised in that tool can be brought into the events section of AppDynamics using HTTP requests or any other means? The external tool is able to communicate with other platforms using APIs.

Hi, in my first search I get all the details that need to be displayed in the results, but it doesn't have an IP field. My second search, on the same index but a different category, has an IP field, and I try to join it to the first using the user field. Both searches return results when run separately, but when I use join it does not work.

index=A category=Requiredevents
| rename required.user as user
| fields user category identity time
| join type=left user
    [| search index=A category=Requiredevents2 required.user=* required.ipaddress=*
    | rename required.user as user required.ipaddress as ipaddress
    | fields user ipaddress]
| table user category identity time ipaddress

I also tried using stats like this, but it didn't work:

(index=A category=Requiredevents) OR (| search index=A category=Requiredevents2 )
| rename required.user as user required.ipaddress as ipaddress
| fields user category identity ipaddress time
| stats count user category identity time ipaddress

Any help would be appreciated, thank you.
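A sketch of the stats-based merge that is commonly used instead of join here, assuming required.user is the shared key across both categories (index, category, and field names are copied from the question). The second attempt above appears to fail because the OR'd category has to be written as plain base-search criteria, without a leading | search inside the parentheses, and because stats needs aggregation functions such as values():

index=A (category=Requiredevents OR category=Requiredevents2)
| rename required.user as user, required.ipaddress as ipaddress
| stats values(category) as category, values(identity) as identity, values(time) as time, values(ipaddress) as ipaddress by user

values() keeps whatever each category contributes, so Requiredevents rows supply identity and time while Requiredevents2 rows supply ipaddress, all grouped on the common user.
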
Hi, I am looking for a way to use SPL in Splunk Enterprise to create a tracing view similar to what we can achieve using Jaeger or Zipkin. If we ingest OTel data into a Splunk index, how can it be used to track the end-to-end flow? Is anyone aware of an app we could use, or what such an SPL search would look like? Is this a viable solution using just Splunk Enterprise?
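A very rough sketch of what such an SPL search might look like, assuming the OTel spans are ingested as JSON events and that fields such as traceId, spanId, parentSpanId, name, startTimeUnixNano, and endTimeUnixNano get extracted. The index name otel_traces and the trace ID value are placeholders, and real OTLP JSON nests these fields, so the exact paths depend on how the export is configured:

index=otel_traces "4bf92f3577b34da6a3ce929d0e0e4736"
| spath
| search traceId="4bf92f3577b34da6a3ce929d0e0e4736"
| eval duration_ms=(endTimeUnixNano-startTimeUnixNano)/1000000
| sort 0 startTimeUnixNano
| table traceId, parentSpanId, spanId, name, startTimeUnixNano, duration_ms

Sorting the spans of one trace by start time and keeping parentSpanId/spanId gives a flat, table-style approximation of the trace; the nested waterfall rendering of Jaeger or Zipkin is not something a plain SPL table reproduces, so this stays a rough view rather than a full tracing UI.
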
Greetings! We've been using version 8.1.12 of the forwarder container for some time (years) and need to move to version 9. I've not been successful in getting the new version running: the container is not initializing and is unable to forward logs. Most recently we are employing

docker.io/splunk/universalforwarder:latest
Digest: sha256:88fb1a2b8d4f47bea89b642973e6502940048010cd9ed288c713ac3c7d079a82

Our deployment is an unmodified image. The container launches, but on closer inspection (by opening a shell into the container) I can see it is hanging on the splunk status command (from ps -ef):

/opt/splunkforwarder/bin/splunk status --accept-license --answer-yes --no-prompt

If I run the same command manually, I can see that it prompts with the following:

Perform migration and upgrade without previewing configuration changes? [y/n]

Answering "y" seems to move things along, and it responds (with many more lines):

"-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2023-07-05.13-55-37' --"

After this, I can manually start the Splunk forwarder! Is there "something" I can do so that it passes through this step without prompting?

Here's some background if it helps. We're using the same Azure Kubernetes Service (AKS) 1.26.3 as before with Splunk forwarder 8.1. We're mapping in the following files:

/opt/splunk/etc/auth/sunsuper/splunkclient.chain
/opt/splunk/etc/auth/sunsuper/splunkclient.pem
/opt/splunkforwarder/etc/system/local/outputs.conf
/opt/splunkforwarder/etc/apps/ta-inspire/local/server.conf
/opt/splunkforwarder/etc/apps/ta-inspire/local/inputs.conf

and launching the container with the same (YAML) environment as before:

env:
  - name: TZ
    value: Australia/Brisbane
  - name: SPLUNK_START_ARGS
    value: '--accept-license --answer-yes --no-prompt'
  - name: SPLUNK_USER
    value: root
  - name: SPLUNK_FORWARD_SERVER
    value: fwdhost.probably.com.au:9997
  - name: SPLUNK_FORWARD_SERVER_ARGS
    value: >-
      -ssl-cert-path /opt/splunk/etc/auth/sunsuper/splunkclient.pem
      -ssl-root-ca-path /opt/splunk/etc/auth/sunsuper/splunkclient.chain
      -ssl-password secret
      -ssl-common-name-to-check fwdhost.probably.com.au
      -ssl-verify-server-cert false
      -auth admin:secret
  - name: ENVIRONMENT
    value: UNIT
  - name: SPLUNK_PASSWORD
    value: secret
  - name: SPLUNK_STANDALONE_URL
    value: fwdhost.probably.com.au:9997

Many thanks, Nev