All Posts


OK. Got it. A run-anywhere search including mockup data:

| makeresults format=csv data="index,index1Id,curEventId,prevEventId,eventId,eventOrigin
index1,23,11,13,,
index1,34,12,14,,
index1,35,12,16,,
index1,65,17,11,,
index1,88,15,12,,
index2,,,,11,1
index2,,,,12,2
index2,,,,13,3
index2,,,,14,4
index2,,,,15,5
index2,,,,16,6
index2,,,,17,7"
```This is just a mockup data preparation; now the fun begins```
```We make two EventId fields from our original one (we can't use rename because we don't want to overwrite the values in the "joining" events with null values)```
| eval curEventId=if(index="index1",curEventId,eventId)
| eval prevEventId=if(index="index1",prevEventId,eventId)
```And now we "copy over" the values from "single side" results into the compound "both sides" result```
```Be cautious about streamstats limitations```
| sort - index
| fields - index
| streamstats values(eventOrigin) AS curEventOrigin by curEventId
| streamstats values(eventOrigin) AS prevEventOrigin by prevEventId
```We only need the combined results, not the partial ones```
| where isnotnull(index1Id)
```clear empty fields```
| fields - eventId eventOrigin
The addon worked fine until the upgrade to 9.3.1; exports to Azure are now halted with this error message:

CRITICAL Could not connect to Azure Blob: NameError("name 'BlobServiceClient' is not defined")

We've deployed the latest 2.4.0 version available from Splunkbase.
You could filter for the errors, extract the customerid, and count by customerid. Then determine what percentage of all the errors each customerid accounts for, and alert if that percentage is greater than a nominal value.
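For example, a minimal sketch (untested; the index, error filter, and 10% threshold are placeholders to adapt):

index=app_logs "ERROR"
| stats count as errors by customerid
``` work out each customer's share of all errors ```
| eventstats sum(errors) as total_errors
| eval error_pct = round(100 * errors / total_errors, 2)
``` alert when one customer exceeds the nominal share ```
| where error_pct > 10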
I have the same situation when trying to migrate my Splunk from an old RHEL 6.9 server to a new RHEL 8.8 one. I completed the rsync, created the splunk user on the new server, and ran rpm -i of the same version; it completed with zero errors. However, when I went to /opt/splunk/bin and ran ./splunk start, it showed the same error as yours. May I know if there is any update on your side; did you fix it?
Can you share some of the events you have?
OK. It seems to be CSRF-prevention cookie related. Try clearing your browser cache and cookies. Maybe your browser has some invalid cookie stored which it supplies with your requests.
Hi Team,

I am looking to create a Splunk app in which the setup page has a dropdown asking the user to select a Splunk index, and with that index I want to update the savedsearches.conf that I am using to trigger alerts. I have created the page like this:

<form version="1.1" theme="light">
  <label>App Setup</label>
  <fieldset submitButton="true">
    <input type="dropdown" token="selected_index" searchWhenChanged="true">
      <label>Select Index</label>
      <search>
        <query>| eventcount summarize=false index=* | dedup index | table index</query>
      </search>
      <default>ibm_defender</default>
      <fieldForLabel>index</fieldForLabel>
      <fieldForValue>index</fieldForValue>
    </input>
  </fieldset>
  <!-- Button Row -->
  <row>
    <button label="Submit">
      <set token="form_submit">1</set>
      <redirect>
        <uri>/app/ibm_storage_defender/ibm_storage_defender_events</uri>
      </redirect>
    </button>
  </row>
</form>

But here the submit button is not working; the setup page just stays there (reload is working). Also, is my approach correct? In my savedsearches.conf I have configured the query like this:

#search = index="$selected_index$" source=mp-defender message="Potential threat detected" | eval rule_title="High severity alert for Potential threat events", urgency="High"

Also, please suggest if there is any better option.
1. Please, don't post links butchered by some external "protection" service.
2. You've got this wrong. Those articles don't describe splitting json events. They describe breaking the input data stream so that it breaks on the "inner" json boundaries instead of the "outer" ones. It doesn't have anything to do with manipulating a single event already broken out from the input stream. It's similar to telling Splunk not to break the stream into lines but rather to ingest whitespace-delimited chunks separately. But your case is completely different, because you want to carry over some common part (some common metadata, I assume) from the outer json structure to each part extracted from the inner json array. This is way above the simple string-based manipulation that Splunk can do in the ingestion pipeline.
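For illustration, the stream-breaking those articles talk about looks roughly like this in props.conf (hypothetical sourcetype and pattern, assuming the input is a json array of objects):

[my_nested_json]
SHOULD_LINEMERGE = false
# break between "},{" pairs so each inner array element becomes its own event;
# the first capture group marks the break point and its match is discarded
LINE_BREAKER = \}(,\s*)\{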
Why not use your search in a dashboard panel table?
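For example, a minimal Simple XML sketch (the query is a placeholder; substitute your own search):

<dashboard version="1.1">
  <label>My Report</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main sourcetype=my_sourcetype | stats count by host</query>
        </search>
        <option name="count">20</option>
      </table>
    </panel>
  </row>
</dashboard>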
It is not clear what your events look like although you have done a good job at describing the information in them. Please share some (anonymised) raw events (in a code block) so we can see what you are dealing with. Also, a representation of your desired output would be informative.
Yes, I tried fetching for only 1 day instead of 7 days but still the same issue. After clicking on "Create Table View" those fields are disappearing. 
Good morning fellow Splunkers. I have a challenge and was wondering if anyone could help me. In some logs with multiple fields with the same label, we use eval mvindex to assign different labels to those fields. For example, in a log we have two fields labelled "Account Name", the first corresponding to the computer account and the second to the user account. We use mvindex to assign labels appropriately (roughly as in the sketch below). This works well for a known number of fields. Now, we also have logs with groups of fields: action, module and rule:

action: quarantine
module: access
rule: verified

action: execute
module: access
rule: verified

action: continue
module: access
rule: verified

action: reject
isFinal: true
module: pdr
rule: reject

I would like to use mvindex to label those so I can use those fields more easily. In the example above, we have four groups of those fields, therefore I would have action1, action2, etc. (same for module and rule). However, the number of groups changes. It could be one, two, three or more. Is there any way to use mvindex dynamically somehow? I imagine we would have to first evaluate the number of those fields (or groups of fields) and then use mvindex to assign different labels? Unless there is a different way to achieve our goal. Many thanks in advance for any advice.

Kind Regards,
Mike.
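PS: for context, what we do today for the two "Account Name" values is roughly this (untested; output field names are just examples):

| eval computer_account = mvindex('Account Name', 0)
| eval user_account = mvindex('Account Name', 1)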
I apologize for the inconvenience here. Unfortunately, Splunk has forced all apps to no longer expose via the UI the ability to accept insecure connections. To keep our app certified and available on Splunkbase we had to remove this from the UI and move it out of inputs.conf into a separate file.
Assuming that, like in the OP, index A still carries the Hostname field that you want to compare with Reporting_Host in index B. In addition, index A has a field "IP address". This should get your desired result.

index=A sourcetype="Any"
| stats values("IP address") as "IP address" by Hostname OS
| append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host]
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where index != "B"
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

Use the following emulation:

| makeresults format=csv data="Hostname, IP address, OS
xyz, 190.1.1.1:101.2.2.2:102.3.3.3:4.3.2.1, Windows
zbc, 100.0.1.0, Linux
alb, 190.1.0.2, Windows
cgf, 20.4.2.1, Windows
bcn, 20.5.3.4:30.4.6.1, Solaris"
| eval "IP address" = split('IP address', ":")
| eval index = "A"
| append [makeresults format=csv data="Reporting_Host
zbc
30.4.6.1
alb
101.2.2.2"
| eval index = "B"]
``` the above emulates
index=A sourcetype="Any"
| stats values("IP address") as "IP address" by Hostname OS
| append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host] ```

The result is:

Hostname  OS       IP address                              match
alb       Windows  190.1.0.2                               ok
bcn       Solaris  20.5.3.4 30.4.6.1                       ok
cgf       Windows  20.4.2.1                                missing
xyz       Windows  101.2.2.2 102.3.3.3 190.1.1.1 4.3.2.1   ok
zbc       Linux    100.0.1.0                               ok
Hello, it should be port 8088 in your script; however, the UI won't work for HEC. Try sending the data to HEC via Postman or curl; if that works, then the issue is likely with the payload or data source. For troubleshooting, use the searches below for your HEC logs:

index=_introspection component=HttpEventCollector sourcetype=http_event_collector_metrics

index=_internal host=yourhechost ERROR

Last thing: try the services/collector/raw endpoint to test, but keep in mind to use services/collector/event for your JSON data. Hope this helps.
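For example, a quick curl test against the event endpoint could look like this (host and token are placeholders):

curl -k "https://yourhechost:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": {"message": "hello hec"}, "sourcetype": "manual_test"}'

A 200 response like {"text":"Success","code":0} means HEC itself is accepting data.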
Hi,

Is there a way to check the Splunk Cloud timezone? I know from the documentation it is at GMT+0 and displays the data based on your configured timezone. My user account is configured at GMT+8; however, when I check the triggered alerts page, the alerts have a CST timezone. Also, in our ES Incident Review, checking the time difference of the triggering event from the triggered alert, it is almost 2 hours. For reference, see below.

[Screenshot: triggered alert in Incident Review]
[Screenshot: highlighted timestamp of the triggering event from the drill-down search]
Hi @sainag_splunk ,

I am trying to open the endpoint in the browser but am getting the below error.

Regards,
Eshwar
Splunk has a warning log:

WARN AggregatorMiningProcessor [10530 merging] - Breaking event because limit of 256 has been exceeded ... data_sourcetype="my_json"

The "my_json" props for the UF are:

[my_json]
DATETIME_CONFIG =
KV_MODE = json
LINE_BREAKER = (?:,)([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = _time
TIME_FORMAT = %2Y%m%d%H%M%S
TRUNCATE = 0
category = Structured
description = my json type without truncate
disabled = false
pulldown_type = 1
MAX_EVENTS = 2500
BREAK_ONLY_BEFORE_DATE = true

The data has about 5000 lines; a sample is below:

{ "Versions" : {
    "sample_version" : "version.json",
    "name" : "my_json",
    "revision" : "rev2.0"},
  "Domains" : [{
    "reset_domain_name" : "RESET_DOMAIN",
    "domain_number" : 2,
    "data_fields" : ["Namespaces/data1", "Namespaces/data2"] }
  ],
  "log" : ["1 ERROR No such directory and file",
    "2 ERROR No such directory and file",
    "3 ERROR No such directory and file",
    "4 ERROR No such directory and file"
  ],
  "address" : [{ "index": 1, "addr": "0xFFFFFF"} ],
  "fail_reason" : [{ "reason" : "SystemError", "count" : 5},
    { "reason" : "RuntimeError", "count" : 0},
    { "reason" : "ValueError", "count" : 1}
  ],
  ... blahblah ...
  "comment" : "None"}

How can we fix this warning? We added the MAX_EVENTS setting in props.conf, but it is not working.
Hi @sainag_splunk ,

Yes, I have configured it with a token and index. Below is my configuration for HEC and the OTel exporter.

Please suggest where it went wrong.

Regards,
Eshwar