All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, how do we check the Armis app's alert logs in Splunk Cloud? We recently updated the app, so how can we check its logs?
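A minimal sketch of where to start looking, assuming the add-on writes its input logs to the _internal index under a source path containing the app name (that source filter is an assumption; adjust it to whatever source values your environment actually shows):

    index=_internal source=*armis* (log_level=ERROR OR log_level=WARN)
    | table _time, source, log_level, _raw

On Splunk Cloud the _internal index is searchable from the search head, so no filesystem access is needed.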
I have two queries and I'm joining them with join on the common field SessionID. With the query below, I only get results when both searches return data; if either the parent search or the subsearch returns nothing, nothing is printed. For example, if there is no LogoutTime available from the subsearch, the parent search's results are not printed either. Is there any way to achieve the desired result?

    index=test "testrequest"
    | rex "(?:.+email\=)(?<Email>[a-zA-Z0-9_\-\@\.]+)"
    | rex "(?:.+trasactionId\=)(?<TransactionID>[a-zA-Z0-9-]+)"
    | rex "(?:.+TransactionTime\=)(?<LoginTime>[a-zA-Z0-9\s:]+EDT)"
    | rex "(?:.+Status\=)(?<Status>\w+)"
    | rex "(?:.+TimeTaken\=)(?<TimeTaken>\d+)"
    | rex "(?:.+\+\+)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
    | table Email, TransactionID, LoginTime, Status, TimeTaken, SessionID
    | join SessionID
        [ search index=test "testrespone"
        | rex "(?:.+TransactionTime\=)(?<LogoutTime>[a-zA-Z0-9\s:]+EDT)"
        | rex "(?:.+SessionId\=)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
        | table SessionID, LogoutTime ]
    | table Email, TransactionID, LoginTime, Status, TimeTaken, SessionID, LogoutTime
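A hedged sketch of the usual fix: join defaults to an inner join, so asking for type=left keeps every parent row and simply leaves LogoutTime empty when the subsearch has no match (same field names as above, untested against the real data):

    ...
    | join type=left SessionID
        [ search index=test "testrespone"
        | rex "(?:.+TransactionTime\=)(?<LogoutTime>[a-zA-Z0-9\s:]+EDT)"
        | rex "(?:.+SessionId\=)(?<SessionID>[a-zA-Z0-9-_:@.]+)(?:\:Status)"
        | table SessionID, LogoutTime ]

If join's subsearch limits ever become a problem, a common alternative is one combined search over both string filters followed by stats values(...) by SessionID.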
Hi, how can I configure a search to run every day between 5:00 am and 11:30 am IST? I don't want to save it as a report, but I'm using this search in a dashboard and it has to run at a particular time daily. Please help. Thanks in advance.
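Dashboard searches only run when the dashboard is loaded, so a fixed daily run generally needs a scheduled saved search that the dashboard then references. A sketch, assuming the search head's timezone is IST and using a hypothetical report name:

    # savedsearches.conf
    [my_daily_window_search]
    search = index=main ...
    cron_schedule = 30 11 * * *
    dispatch.earliest_time = @d+5h
    dispatch.latest_time = @d+11h+30m

The report fires at 11:30 and covers 05:00-11:30 of the same day; the dashboard panel can then point at it with <search ref="my_daily_window_search">, which reuses the scheduled results instead of re-running the query on load.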
Hello everyone, I am trying to create a pie chart of the cache operation split (in percentages) for hit/miss/pass, using the query below for the selected hostname:

    index="my_index" openshift_container_name="container"
    | eval description=case(handling == "hit","HIT", handling == "miss","MISS", handling == "pass","PASS")
    | search hostname="int-ie-yyp.grp"
    | addtotals
    | eval cache_hit=round(100*HIT/Total,1)
    | eval cache_miss=round(100*MISS/Total,1)
    | eval cache_pass=round(100*PASS/Total,1)

When I add:

    | stats values(cache_hit) as cacheHit values(cache_miss) as cacheMiss values(cache_pass) as cachePass by description

no data is generated. However, when I try a count it works:

    | stats count by description

Can someone please help?
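The likely cause: addtotals sums numeric fields within each event, but no single event carries HIT/MISS/PASS fields, so Total and the cache_* evals come out null. A sketch of one way to compute the split after aggregating (for a pie chart, the plain counts alone would also render as proportions):

    index="my_index" openshift_container_name="container" hostname="int-ie-yyp.grp"
    | eval description=case(handling=="hit","HIT", handling=="miss","MISS", handling=="pass","PASS")
    | stats count by description
    | eventstats sum(count) as Total
    | eval percent=round(100*count/Total,1)
    | fields description, percent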
Hi, need help. I have a Splunk environment on version 8.2 with the Cisco Firepower eStreamer service (Splunk Add-On) version 5 and the Splunk Add-on for Carbon Black 2.1 (latest). Both work incorrectly on Splunk 8: events show parse errors, and key fields cannot be identified. I'm not sure what causes this or whether a setting is missing. I followed the guideline from https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSpl... and the Splunk doc (invalid link). I have been through all the community articles similar to this error, but no luck. Any advice on getting this working is much appreciated. Thank you.

Below is the setup info.

Cisco Firepower eStreamer service (Splunk Add-On) version 5
Issue: Cisco Firepower parsing issue
Device model: Cisco Firepower 1010 Firewall
Collecting method: Syslog to Splunk HF > Indexer
Splunk Add-on installed on both HF and SH: https://splunkbase.splunk.com/app/3662 (latest version)
Splunk HF and SH version: 8.2.1
Source type: cisco:firepower:syslog
Source type configuration: tried Auto as well as Regex

Splunk Add-on for Carbon Black 2.1 (latest)
Issue: meanwhile, the same happens with the CarbonBlack bit9 JSON parsing: multiple events are merged by Splunk and therefore fail to parse, though some events come through without any issue. The raw logs show no different patterns, and saving the raw logs to a text file and uploading them manually works without any problem.
Collecting method: UF > Indexer
Splunk Add-on installed: https://splunkbase.splunk.com/app/2790 (latest version)
Splunk HF and SH version: 8.2.1
Source type: bit9:carbonblack:json
Source type configuration: tried Auto as well as Regex
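For the Carbon Black symptom specifically (multiple JSON events merged into one), that pattern usually points at line breaking on the parsing tier. A hedged props.conf sketch, assuming the feed delivers one JSON object per line; since the path is UF > Indexer, this would belong on the indexer, not the UF:

    # props.conf on the indexer (sketch; verify against the add-on's shipped props)
    [bit9:carbonblack:json]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    KV_MODE = json

The fact that a manual file upload parses fine is consistent with this: upload goes through different input processing than the forwarded stream, so a line-breaking problem on the indexer would not show up there.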
While making a Splunk REST API POST call to a KV Store collection, one string field's value is too large, so the call fails with:

    String value too long. valueSize=526402, maxValueSize=524288

Is there a way to increase the allowed string field length through configuration? Please let me know your suggestions.
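For scale, maxValueSize=524288 is 512 KiB, so the posted value overshoots it by only about 2 KB. If the limit cannot be raised in your environment, one workaround sketch is to split the oversized string across several documents and reassemble it on read; the app, collection name, and chunking fields below are all hypothetical, and only the REST endpoint shape is standard:

    # POST one chunk per document to the KV Store data endpoint
    curl -k -u admin:changeme \
      https://localhost:8089/servicesNS/nobody/myapp/storage/collections/data/mycoll \
      -H "Content-Type: application/json" \
      -d '{"doc_id": "report42", "chunk": 0, "payload": "<first ~500 KB of the string>"}'

A small script or lookup can then concatenate the chunks ordered by the chunk field.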
I am trying to use a Universal Forwarder to get a load of Windows event logs, which I need to analyse, into Splunk. The event logs are from about 7 different systems and are all located in a folder on my local laptop. I have tried adding the folder to the inputs.conf file and setting the sourcetype to WinEventLog, but once the data is in, the individual events are not being extracted. Rather, the entire file is passed as one event and all I can see are the headers for each event log. Is someone able to help me with this please? I should probably state that I am using a Splunk Cloud instance and do not have a deployment server; I need to go straight from my laptop to the Splunk Cloud instance. Thanks
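A hedged guess at the cause: if those files are binary .evtx exports, the monitor input cannot parse them, and the WinEventLog sourcetype expects the live Windows Event Log input rather than flat files. One common workaround is to render the exports to text first with wevtutil and monitor the rendered output (paths and the sourcetype below are illustrative; check how the rendered output actually looks before settling on one):

    rem Convert each saved .evtx export to rendered XML text
    wevtutil qe C:\EventLogs\system1.evtx /lf:true /f:RenderedXml > C:\EventLogs\text\system1.xml

    # inputs.conf on the UF
    [monitor://C:\EventLogs\text]
    sourcetype = XmlWinEventLog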
I asked this on Stack Overflow, but I couldn't get an answer yet. This is my question: I want to change the bar graph color based on a token value. For example, when I get the token named "fruit", I want to change the entire bar graph color depending on the token, like this:

Apple: Red (0xfc4103)
banana: Yellow (0xf5db36)
grape: Purple (0xa536eb)

(When the token data is "Apple", the entire graph color becomes red, 0xfc4103.) So I wrote this code using "charting.fieldColors":

    <panel>
      <chart>
        <search>
          <query> .......
            | eval paint_chart = case("$fruit$"=="Apple","0xfc4103", "$fruit$"=="banana","0xf5db36", "$fruit$"=="grape","0xa536eb")
            | chart count by amount_DT
            | rename count AS "Number of fruit" ...
          </query>
          <earliest>-10d@d</earliest>
          <latest>now</latest>
        </search>
        ....
        <option name="charting.fieldColors">{"Number of case" : $results.paint_chart$}</option>
      </chart>
    </panel>

But it's not working at all. How could I solve this problem?
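A sketch of the usual Simple XML pattern for this: instead of reading an eval result back out of the search, set a color token in the input's <change> handler and reference that token in the chart option. The input type and the "Number of fruit" series name are taken from the question; the rest is an assumption about the dashboard's structure:

    <input type="dropdown" token="fruit">
      <change>
        <condition value="Apple"><set token="fruit_color">0xfc4103</set></condition>
        <condition value="banana"><set token="fruit_color">0xf5db36</set></condition>
        <condition value="grape"><set token="fruit_color">0xa536eb</set></condition>
      </change>
      ...
    </input>

    <option name="charting.fieldColors">{"Number of fruit": $fruit_color$}</option>

As far as I can tell, $results.paint_chart$ is not a token that chart options can resolve, which would explain why the original approach does nothing.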
Hello, I have a question regarding forwarding and receiving in Splunk. Can I configure the deployment client to send logs to another log collector in case the first one is not responding or receiving logs? To be more specific, is there any kind of configuration that can be done so the deployment client will automatically switch to another log collector if the first one isn't available? Thank you.
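Failover between receivers is handled on the forwarding side rather than by the deployment client. A sketch of outputs.conf with two receivers in one target group, using hypothetical hostnames; the forwarder load-balances across the listed peers and stops sending to one that becomes unreachable:

    [tcpout]
    defaultGroup = my_collectors

    [tcpout:my_collectors]
    server = collector1.example.com:9997, collector2.example.com:9997

The deploymentclient.conf target (the deployment server) is a separate setting and does not affect where logs are sent.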
Hi Splunk experts, I have a dashboard with a base search whose results are used in two different panels to collect data to a sourcetype; the two panel queries perform two very different kinds of operations. Currently I'm running them manually, but I want to run this on a schedule. Is that possible? I thought of a saved search, but I'm not sure whether that's the right solution. Could you please suggest a better approach? Thanks in advance!!
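A saved search is indeed the usual vehicle here: dashboards only execute when someone opens them, so each panel's full pipeline (base search plus that panel's post-processing) is typically promoted to its own scheduled report that writes out with collect. A sketch with hypothetical names and placeholders for the actual SPL:

    # savedsearches.conf - one stanza per panel
    [panel_one_collect]
    search = <base search> | <panel 1 operations> | collect index=summary sourcetype=panel1_summary
    cron_schedule = 0 6 * * *

    [panel_two_collect]
    search = <base search> | <panel 2 operations> | collect index=summary sourcetype=panel2_summary
    cron_schedule = 0 6 * * *

The base search gets duplicated into both reports; that is the trade-off for scheduling, since base/post-process chaining only exists inside a dashboard.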
Hi, is it possible to restore archived data for one single host? Say we have index=windows and we want to restore archived data only for one host, i.e. index=windows host=xxx. Is it possible somehow? Thanks in advance
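Frozen archives are restored per bucket, not per host: a bucket holds events from every host that wrote to the index during its time span, so the usual approach is to thaw the buckets covering the needed time range and filter by host at search time. A sketch of the standard thaw procedure (paths and the bucket name are illustrative):

    # copy the frozen bucket into the index's thaweddb, then rebuild it
    cp -r /archive/windows/db_1688169600_1688083200_42 $SPLUNK_DB/windows/thaweddb/
    splunk rebuild $SPLUNK_DB/windows/thaweddb/db_1688169600_1688083200_42
    splunk restart

After the restart, searching index=windows host=xxx over the thawed time range returns just that host's events.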
Hi, can we use wildcards for service monitoring? https://docs.splunk.com/Observability/gdi/monitors-hosts/win-services.html If yes, can you please provide a sample? Thanks.
Hi, we use SAP Cloud ALM for monitoring SAP SaaS-based applications. Is there any way alerts raised in that tool can be brought into the Events section of AppDynamics, using HTTP requests or any other means? The external tool is capable of communicating with other platforms using APIs.
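One hedged option: AppDynamics exposes a REST endpoint for creating custom events, so SAP Cloud ALM's outbound HTTP integration could post alerts to it. A sketch with placeholder controller, credentials, and application name; the parameter set should be checked against your controller version's documentation:

    curl -u user@customer1:password -X POST \
      "https://controller.example.com/controller/rest/applications/MyApp/events?severity=ERROR&summary=SAP+Cloud+ALM+alert&eventtype=CUSTOM&customeventtype=sap_alm_alert"

Custom events created this way show up in the application's event stream and can drive policies and health rules.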
Hi, my first search returns all the details that need to be displayed in the results, but it doesn't have an IP field. My second search (same index, different category) has an IP field, and I try to join the two on the user field. Both searches return results when run separately, but with join it is not working.

    index=A category=Requiredevents
    | rename required.user as user
    | fields user category identity time
    | join type=left user
        [| search index=A category=Requiredevents2 required.user=* required.ipaddress=*
        | rename required.user as user required.ipaddress as ipaddress
        | fields user ipaddress]
    | table user category identity time ipaddress

I also tried using stats like this, but it didn't work:

    (index=A category=Requiredevents) OR (| search index=A category=Requiredevents2 )
    | rename required.user as user required.ipaddress as ipaddress
    | fields user category identity ipaddress time
    | stats count user category identity time ipaddress

Any help would be appreciated, thank you
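The stats attempt is close; a hedged sketch of the usual shape (the OR belongs inside one base search with no stray pipe, and stats needs aggregation functions plus a by clause):

    index=A (category=Requiredevents OR category=Requiredevents2)
    | rename required.user as user, required.ipaddress as ipaddress
    | stats values(category) as category, values(identity) as identity,
            values(time) as time, values(ipaddress) as ipaddress by user

If required.user is only present in one category's events, check what the field is called in the other set, since the by field must exist on both sides for rows to merge.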
Hi, I am looking for a way to use SPL in Splunk Enterprise to create a tracing view similar to what we can achieve using Jaeger or Zipkin. If we could ingest OTel data into a Splunk index, how could it be used for tracking the end-to-end flow? Is anyone aware of an app we could use, or what would that SPL look like? Is it a viable solution using just Splunk Enterprise?
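Not a full waterfall view, but a sketch of how far plain SPL can get once spans land in an index; the field names follow the OTLP JSON shape (traceId, spanId, parentSpanId, startTimeUnixNano, endTimeUnixNano) and are assumptions about your export format:

    index=otel_traces traceId="4bf92f3577b34da6a3ce929d0e0e4736"
    | eval start_ms = startTimeUnixNano/1000000,
           dur_ms   = (endTimeUnixNano - startTimeUnixNano)/1000000
    | sort 0 start_ms
    | table start_ms, dur_ms, name, spanId, parentSpanId

Sorted by start time with durations, this reads as a crude span timeline for one trace; rendering actual parent/child waterfalls would need a custom visualization, which is where Jaeger/Zipkin or Splunk APM earn their keep.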
Hi, I am looking for a way to use SPL in Splunk Enterprise to create a tracing view similiar to what we can achieve using jager or zipkin. If we could ingest otel data into splunk index, then how can be used for tracking the end to end flow. Anyone aware of any app that we could use or how will that SPL looks like? Is it a viable solution using just Splunk Enterprise?  
Greetings! We've been using version 8.1.12 of the forwarder container for some time (years) and need to move to version 9. I've not been successful in getting the new version running: the container does not finish initializing and is unable to forward logs. Most recently I've been employing:

    docker.io/splunk/universalforwarder:latest
    Digest: sha256:88fb1a2b8d4f47bea89b642973e6502940048010cd9ed288c713ac3c7d079a82

Our deployment is an unmodified image. The container launches, but on closer inspection (by opening a shell into the container) I can see it's hanging on the splunk status command (from ps -ef):

    /opt/splunkforwarder/bin/splunk status --accept-license --answer-yes --no-prompt

If I run the same command myself, I can see that it prompts with:

    Perform migration and upgrade without previewing configuration changes? [y/n]

Answering "y" seems to move things along, and it responds (with lots more lines):

    -- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2023-07-05.13-55-37' --

After this, I can manually start the Splunk forwarder! Is there "something" I can do so that it passes through this step without prompting?

Here's some background, if it helps. We're using the same Azure Kubernetes Service (AKS) 1.26.3 as before with Splunk forwarder 8.1. We're mapping in the following files:

    /opt/splunk/etc/auth/sunsuper/splunkclient.chain
    /opt/splunk/etc/auth/sunsuper/splunkclient.pem
    /opt/splunkforwarder/etc/system/local/outputs.conf
    /opt/splunkforwarder/etc/apps/ta-inspire/local/server.conf
    /opt/splunkforwarder/etc/apps/ta-inspire/local/inputs.conf

and launching the container with the same (YAML) environment as before:

    env:
      - name: TZ
        value: Australia/Brisbane
      - name: SPLUNK_START_ARGS
        value: '--accept-license --answer-yes --no-prompt'
      - name: SPLUNK_USER
        value: root
      - name: SPLUNK_FORWARD_SERVER
        value: fwdhost.probably.com.au:9997
      - name: SPLUNK_FORWARD_SERVER_ARGS
        value: >-
          -ssl-cert-path /opt/splunk/etc/auth/sunsuper/splunkclient.pem
          -ssl-root-ca-path /opt/splunk/etc/auth/sunsuper/splunkclient.chain
          -ssl-password secret
          -ssl-common-name-to-check fwdhost.probably.com.au
          -ssl-verify-server-cert false
          -auth admin:secret
      - name: ENVIRONMENT
        value: UNIT
      - name: SPLUNK_PASSWORD
        value: secret
      - name: SPLUNK_STANDALONE_URL
        value: fwdhost.probably.com.au:9997

Many thanks, Nev
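Building on the poster's own finding that a single "y" unblocks the migration, one unofficial workaround sketch is to pre-answer that prompt before the stock entrypoint runs. The entrypoint path and argument below are assumptions about the docker-splunk image layout and should be verified against the image you pull:

    # container spec fragment (sketch, unverified)
    command: ["/bin/sh", "-c"]
    args:
      - |
        echo y | /opt/splunkforwarder/bin/splunk status --accept-license --answer-yes --no-prompt || true
        exec /sbin/entrypoint.sh start-service

If the image honors an environment knob for skipping the migration preview, that would be cleaner, but I'm not aware of a documented one.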
Hello, I am currently trying to read emails from a Microsoft 365 Business account, but unfortunately I haven't been able to find a suitable add-on within Splunk that would allow me to accomplish this task. Has anyone been able to carry out this process successfully? I would greatly appreciate any help you can provide.
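A hedged sketch of one route, assuming an Azure AD app registration with the Graph Mail.Read application permission: pull messages from the Microsoft Graph API and feed them to Splunk via HEC or a scripted input. The mailbox address and token handling are placeholders:

    curl -H "Authorization: Bearer $GRAPH_TOKEN" \
      'https://graph.microsoft.com/v1.0/users/mailbox@example.com/messages?$top=10'

The common Microsoft 365 add-ons on Splunkbase focus on audit/management activity rather than mailbox content, which may be why nothing obvious turned up.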
I would like to manually import AWS CloudTrail logs which were stored as gzipped JSON files on S3. Those files reside on my local disk, one file per hour, per day, etc. They should not be imported directly from the cloud, e.g. via the Splunk TA for AWS; I have installed that app, though, to get all the various AWS-specific sourcetypes. The problem with the data is that each file contains only a single line, which is one huge JSON array holding all the individual events. This format is apparently not uncommon, so I am referring to AWS CloudTrail only for the sake of providing an example. Here is a small, artificially contrived example of how such a file looks, with 3 events:

    {"Records":[{"eventVersion":"1.08","eventTime":"2022-06-08T22:10:01Z","userIdentity":{"type":"AssumedRole"}},{"eventVersion":"1.08","eventTime":"2022-06-08T22:10:03Z","userIdentity":{"type":"AssumedRole"}},{"eventVersion":"1.08","eventTime":"2022-06-08T22:10:05Z","userIdentity":{"type":"AssumedRole"}}]}

Of course the real CloudTrail events are much more talkative and nested, but that doesn't pose a problem here. Selecting the sourcetype aws:cloudtrail does not properly split the events, so I changed the LINE_BREAKER to the following value:

    LINE_BREAKER = ((\{"Records":\[)*|,*){"eventVersion"

Using this I was able to properly index all the events and even get rid of the header up front. However, the very last event still gets indexed wrongly, as it ends with the closing "]}" from the opening/wrapping "Records" element, and as such it's not proper JSON. How can I get rid of that trailing junk "]}" so the last event also gets properly indexed?
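A sketch of the usual answer: add an index-time SEDCMD next to the LINE_BREAKER to strip the wrapper's tail from the final event. SEDCMD runs on the parsing tier (indexer or heavy forwarder), and the stanza name assumes the same aws:cloudtrail sourcetype:

    # props.conf
    [aws:cloudtrail]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ((\{"Records":\[)*|,*){"eventVersion"
    SEDCMD-strip_wrapper_tail = s/\]\}\s*$//

The substitution is applied to each event after line breaking, so only the last event (the one actually ending in "]}") is changed.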
In a simple XML dashboard, is there a way to get the index of a clicked mv field value in a dashboard drilldown? For instance, if I click on the 5 in the count field of row 1, I would like to also get the user bob into a token. If I knew the index of the count value I clicked on, I could take $row.user$, split it by comma, and keep the entry with the same index as the clicked count. Or maybe there is an even better way to do this; I'm all ears.

    host   user   time                 count
    host1  bob    2023-07-05 07:36:51  5
           joe    2023-07-05 07:49:32  4
    host2  steve  2023-07-05 09:22:02  4
           john   2023-07-05 06:14:12  4

I suppose I could combine values in a field and parse them on click, like the examples below. However, if possible, I would like to keep them in separate fields, as I think it looks nicer.

    host   user   time                 count
    host1  bob    2023-07-05 07:36:51  5 {bob}
           joe    2023-07-05 07:49:32  4 {joe}
    host2  steve  2023-07-05 09:22:02  4 {steve}
           john   2023-07-05 06:14:12  4 {john}

or

    host   user | time | count
    host1  bob | 2023-07-05 07:36:51 | 5
           joe | 2023-07-05 07:49:32 | 4
    host2  steve | 2023-07-05 09:22:02 | 4
           john | 2023-07-05 06:14:12 | 4
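A hedged sketch of the combine-and-parse variant that keeps the table readable: zip each count with its user so the clicked cell itself carries both, then unpack in the drilldown with eval tokens (field names from the question; the display trade-off is a "user: count" cell instead of a bare number):

    ... | eval count=mvzip(user, count, ": ")

    <drilldown>
      <eval token="clicked_user">mvindex(split($click.value2$, ": "), 0)</eval>
      <eval token="clicked_count">mvindex(split($click.value2$, ": "), 1)</eval>
    </drilldown>

Each value of a multivalue cell is clickable on its own, and $click.value2$ holds exactly the value that was clicked, so no positional index is needed.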
I have been attempting to contact Support for several weeks now. I just need a license reset, because we indexed too much data, and a quote for increasing our current license so we can index more. I've tried calling every day, but there's no answer, and the submit-a-case page keeps failing whenever I hit Submit. How am I supposed to get support when nothing works?