All Posts



Can you share some of the events you have?
OK. It seems to be CSRF-prevention cookie related. Try clearing your browser cache and cookies. Maybe your browser has some invalid cookie stored which it supplies with your requests.
Hi Team,

I am looking to create a Splunk app whose setup page contains a dropdown asking the user to select a Splunk index. With that index I want to update the savedsearches.conf that I use to trigger an alert. I have created the page like this:

<form version="1.1" theme="light">
  <label>App Setup</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="selected_index" searchWhenChanged="true">
      <label>Select Index</label>
      <search>
        <query>| eventcount summarize=false index=* | dedup index | table index</query>
      </search>
      <default>ibm_defender</default>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
    </input>
  </fieldset>
  <!-- Button Row -->
  <row>
    <button label="Submit">
      <set token="form_submit">1</set>
      <redirect>
        <uri>/app/ibm_storage_defender/ibm_storage_defender_events</uri>
      </redirect>
    </button>
  </row>
</form>

But the Submit button is not working: the setup page stays where it is, while reload is working. Also, is my approach correct? In my savedsearches.conf I have configured the query like:

search = index="$selected_index$" source=mp-defender message="Potential threat detected" | eval rule_title="High severity alert for Potential threat events", urgency="High"

Please also suggest if there is a better option for this.
1. Please don't post links butchered by some external "protection" service.

2. You got this wrong. Those articles don't describe splitting JSON events. They describe breaking the input data stream so that it breaks on the "inner" JSON boundaries instead of the "outer" ones. That has nothing to do with manipulating a single event that has already been broken out of the input stream. It's similar to telling Splunk not to break the stream into lines but instead to ingest whitespace-delimited chunks separately. Your case is completely different, because you want to carry over some common part (some shared metadata, I assume) from the outer JSON structure to each element extracted from the inner JSON array. That is well beyond the simple string-based manipulation Splunk can do in the ingestion pipeline.
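To illustrate what such a fan-out would have to do outside Splunk (for example in a preprocessing script run before the data reaches the input), here is a minimal Python sketch; the field name `items` and the sample event are hypothetical, not taken from this thread:

```python
import json

def fan_out(raw):
    """Split one outer JSON event into per-item events, copying the
    shared top-level metadata onto each element of the inner array."""
    outer = json.loads(raw)
    items = outer.pop("items")  # hypothetical name of the inner array field
    # every remaining top-level key is treated as shared metadata
    return [json.dumps({**outer, **item}) for item in items]

raw = '{"host": "web01", "ts": "2024-01-01T00:00:00Z", "items": [{"msg": "a"}, {"msg": "b"}]}'
for event in fan_out(raw):
    print(event)
```

Each emitted line is then a self-contained event Splunk can ingest with ordinary line breaking.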
Why not use your search in a dashboard panel table?
It is not clear what your events look like although you have done a good job at describing the information in them. Please share some (anonymised) raw events (in a code block) so we can see what you are dealing with. Also, a representation of your desired output would be informative.
Yes, I tried fetching only 1 day instead of 7 days, but it is still the same issue. After clicking "Create Table View", those fields disappear.
Good morning fellow Splunkers. I have a challenge and was wondering if anyone could help me. In some logs with multiple fields sharing the same label, we use eval with mvindex to assign a different label to each field. For example, in one log we have two fields labelled "Account Name", the first corresponding to the computer account and the second to the user account. We use mvindex to assign labels appropriately. This works well for a known number of fields.

Now we also have logs with repeated groups of the fields action, module and rule:

    action: quarantine
    module: access
    rule: verified

    action: execute
    module: access
    rule: verified

    action: continue
    module: access
    rule: verified

    action: reject
    isFinal: true
    module: pdr
    rule: reject

I would like to use mvindex to label these so I can use the fields more easily. In the example above there are four groups, therefore I would have action1, action2, etc. (and the same for module and rule). However, the number of groups changes: it could be one, two, three or more. Is there any way to use mvindex dynamically somehow? I imagine we would first have to evaluate the number of those fields (or groups of fields) and then use mvindex to assign the different labels, unless there is a different way to achieve our goal. Many thanks in advance for any advice. Kind regards, Mike.
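For what it's worth, the restructuring being asked for (an unknown number of parallel field groups turned into per-group records) can be sketched outside SPL like this; the values come from the sample in the post, and the approach (positional zipping, roughly what chaining mvzip/mvexpand emulates in SPL) is illustrative rather than a ready-made Splunk answer:

```python
# The four repeated field groups from the post, as parallel multivalue
# lists (the order is what mvindex would see in Splunk).
actions = ["quarantine", "execute", "continue", "reject"]
modules = ["access", "access", "access", "pdr"]
rules   = ["verified", "verified", "verified", "reject"]

# zip() pairs the values up positionally, however many groups there
# are -- producing the per-group records the question asks for.
groups = [
    {"action": a, "module": m, "rule": r}
    for a, m, r in zip(actions, modules, rules)
]
```

The key point is that no group count needs to be known in advance; the pairing is driven by position alone.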
I apologize for the inconvenience here. Unfortunately, Splunk has required that all apps stop exposing, via the UI, the ability to accept insecure connections. To keep our app certified and available on Splunkbase, we had to remove this from the UI and move it out of inputs.conf into a separate file.
Assuming that, like in the OP, index A still carries a Hostname field that you want to compare with Reporting_Host in index B, and that, in addition, index A has a field "IP address", this should get your desired result:

index=A sourcetype="Any"
| stats values("IP address") as "IP address" by Hostname OS
| append
    [search index=B sourcetype="foo"
    | stats values(Reporting_Host) as Reporting_Host]
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where index != "B"
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

Use the following emulation:

| makeresults format=csv data="Hostname, IP address, OS
xyz, 190.1.1.1:101.2.2.2:102.3.3.3:4.3.2.1, Windows
zbc, 100.0.1.0, Linux
alb, 190.1.0.2, Windows
cgf, 20.4.2.1, Windows
bcn, 20.5.3.4:30.4.6.1, Solaris"
| eval "IP address" = split('IP address', ":")
| eval index = "A"
| append
    [makeresults format=csv data="Reporting_Host
zbc
30.4.6.1
alb
101.2.2.2"
| eval index = "B"]
``` the above emulates index=A sourcetype="Any" | stats values("IP address") as "IP address" by Hostname OS | append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host] ```

The result is:

Hostname  OS       IP address                                match
alb       Windows  190.1.0.2                                 ok
bcn       Solaris  20.5.3.4, 30.4.6.1                        ok
cgf       Windows  20.4.2.1                                  missing
xyz       Windows  101.2.2.2, 102.3.3.3, 190.1.1.1, 4.3.2.1  ok
zbc       Linux    100.0.1.0                                 ok
Hello, it should be port 8088 in your script; however, the UI won't work for the HEC. Try sending the data to HEC via Postman or curl; if that works, then the issue is likely with the payload from your data source. For troubleshooting, use the searches below for your HEC logs:

index=_introspection component=HttpEventCollector sourcetype=http_event_collector_metrics

index=_internal host=yourhechost ERROR

As a last resort, try the services/collector/raw endpoint to test, but keep in mind to use services/collector/event for your JSON data. Hope this helps.
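As a reference for that test, here is a minimal sketch of the JSON body the HEC "event" endpoint expects; the sourcetype, index, host, and token below are placeholders, not values from this thread:

```python
import json

def hec_event(event, sourcetype="_json", index="main"):
    """Build the JSON body expected by /services/collector/event."""
    return json.dumps({
        "event": event,          # the actual event data (dict or string)
        "sourcetype": sourcetype,
        "index": index,
    })

payload = hec_event({"message": "hello"}, sourcetype="my_json")
# Send it with, e.g.:
#   curl -k https://<hec-host>:8088/services/collector/event \
#        -H "Authorization: Splunk <token>" -d "$payload"
```

If a hand-built payload like this is accepted but your script's is not, the problem is in how the script serializes its data.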
Hi, Is there a way to check the Splunk Cloud timezone? I know from the documentation that it's at GMT+0 and that data is displayed based on your configured timezone. My user account is configured at GMT+8; however, when I check the triggered alerts page, the alerts have a CST timezone. Also, in our ES Incident Review, comparing the time of the triggering event with the triggered alert, the difference is almost 2 hours. For reference, see below.

[Screenshot: triggered alert in Incident Review; the highlighted portion is the timestamp of the triggering event from the drill-down search]
Hi @sainag_splunk, I am trying to open the endpoint in the browser but am getting the error below. Regards, Eshwar
Splunk has this warning log:

WARN AggregatorMiningProcessor [10530 merging] - Breaking event because limit of 256 has been exceeded ... data_sourcetype="my_json"

The "my_json" sourcetype for the UF is:

[my_json]
DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = (?:,)([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = _time
TIME_FORMAT = %2Y%m%d%H%M%S
TRUNCATE = 0
category = Structured
description = my json type without truncate
disabled = false
pulldown_type = 1
MAX_EVENTS = 2500
BREAK_ONLY_BEFORE_DATE = true

The data has about 5000 lines; a sample is below:

{ "Versions" : { "sample_version" : "version.json", "name" : "my_json", "revision" : "rev2.0"},
  "Domains" : [{ "reset_domain_name" : "RESET_DOMAIN", "domain_number" : 2, "data_fields" : ["Namespaces/data1", "Namespaces/data2"] } ],
  "log" : ["1 ERROR No such directory and file", "2 ERROR No such directory and file", "3 ERROR No such directory and file", "4 ERROR No such directory and file" ],
  "address" : [{ "index": 1, "addr": "0xFFFFFF"} ],
  "fail_reason" : [{ "reason" : "SystemError", "count" : 5}, { "reason" : "RuntimeError", "count" : 0}, { "reason" : "ValueError", "count" : 1} ],
  ... blahblah ...
  "comment" : "None"}

How do we fix this warning? We added the MAX_EVENTS setting to props.conf, but it is not working.
Hi @sainag_splunk, Yes, I had configured it with the token and index. Below is my configuration for HEC and the OTel exporter. Please suggest where it went wrong. Regards, Eshwar
If you already have HEC setup with the token, index. You should be good on the splunk indexing side.  You will need to use HEC exporter. HEC exporter: https://github.com/open-telemetry/opentelemetr... See more...
If you already have HEC set up with the token and index, you should be good on the Splunk indexing side. You will need to use the HEC exporter. HEC exporter: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/splunkhecexporter/README.md Refer: https://github.com/signalfx/splunk-otel-collector/tree/main/examples/otel-logs-splunk Hope all these links help.
Hi @sainag_splunk, Thank you for your response. Just for your information, I have installed HEC on-prem, not on Kubernetes; I think the command you shared is for a Kubernetes environment. My goal is to send log data through the OTel collector to the HEC endpoint.
Why does neither of the Splunk.com site dashboard examples return data for the following query?

index=main sourcetype=access_combined* status=200 action=purchase | timechart count by productid

Here's what the videos say we should get: [screenshot] But here's what the query returns: [screenshot] It groups by date successfully but doesn't yield results by product. Both of the online dashboard-creation videos at the URLs below yield the desired results shown in the first screenshot above.

Note: the source="tutorialdata.zip:*".

The two video training sites are here:
https://www.splunk.com/en_us/training/videos/all-videos.html
https://www.splunk.com/en_us/blog/learn/splunk-tutorials.html#education
Is there a way to create a detector to alert if a particular user (based on a part of the URL) is experiencing a higher number of errors? For example, if I have a /user/{customerId}/do-something URL, then I want to be alerted when a particular {customerId} has a high number of errors within a specific time period. If there's a higher number of errors but they're mostly for different {customerId} values, then I don't want a notification. Thanks.
Assuming you already have a token $City_tok$ from the input, mvexpand is the most traditional way to do it:

| spath path=DataCenters{}
| mvexpand DataCenters{}
| spath input=DataCenters{}
| where City == "$City_tok$"

If mvexpand is a problem in your environment, there are other ways.