
All Posts

@kranthimutyala2 I am a volunteer here, as are most of those providing answers. Please don't tag me in your posts. If I have time and have something to contribute, I will try to help. But I will choose which posts to answer and when.
What is the data source for that table? The JSON you have shared does not appear to cover that.
| untable Component Level count
| eval Component_Level=Component."_".Level
| table Component_Level count
| transpose 0 header_field=Component_Level
| fields - column
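In case it helps to see why this works: after | untable Component Level count, the matrix becomes one row per (Component, Level) pair:

Component   Level   count
Resp_time   Green   0
Resp_time   Amber   200
Resp_time   Red     400
...

The eval then glues the two keys into names like Resp_time_Green, transpose 0 flips those rows into columns (0 means no limit on the number of rows transposed), and fields - column drops the leftover "column" field that transpose adds.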
This is the problem: I don't know how this works... but I want to use the data that appears in the table at the bottom:
Which search are you trying to extend - if it is "mttrSearch", you would do something like this:

"dataSources": {
    "dsQueryCounterSearch1": {
        "options": {
            "extend": "mttrSearch",
            "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS",
            "queryParameters": {
                "earliest": "$earliest_time$",
                "latest": "$latest_time$"
            }
        },
        "type": "ds.search"
    },
Hi Splunk, I have a table like below:

Component   Green   Amber   Red
Resp_time   0       200     400
5xx         0       50      100
4xx         0       50      100

I want to combine them to produce a single row like below:

Resp_time_Green  Resp_time_Amber  Resp_time_Red  5xx_Green  5xx_Amber  5xx_Red  4xx_Green  4xx_Amber  4xx_Red
0                200              400            0          50         100      0          50         100
Since you are using joins, you could be hitting limits on the subsearches - have you tried a shorter timeframe?
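If limits are the cause, these are the stanzas to check; a sketch showing the shipped defaults (verify the values in your own limits.conf, as they can differ by version):

# limits.conf
[join]
subsearch_maxout = 50000    # max results a join subsearch may return
subsearch_maxtime = 60      # max seconds a join subsearch may run

[subsearch]
maxout = 10000              # max results for generic subsearches
maxtime = 60

Anything beyond these caps is silently truncated, which typically shows up as "missing" rows rather than an error.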
Hi Team, I have the below JSON field in the Splunk event:

[
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_1","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_2","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_3","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"}
]

Just as an example I have added 3 entries, but in reality we have more than 200 records in this field in a single event. When I use spath to extract this data it gives blank results; the same data, when tested with fewer records (<10), extracts all the key-value pairs. Is there a better way to extract from large event data? Please help me with the SPL query. Thanks. @yuanliu @gcusello
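The usual culprit here is spath's extraction limit: by default, limits.conf caps spath extraction at extraction_cutoff = 5000 characters, and 200+ records in a single field will blow past that. A workaround sketch that avoids raising the limit, assuming the field holds a flat array of objects with no nested braces (as in the sample; yourJsonField is a placeholder for the actual field name):

| rex field=yourJsonField max_match=0 "(?<record>\{[^{}]+\})"
| mvexpand record
| spath input=record
| table RecordID, recordStatus, sourceAccountId, destinationAccountId, defaultOwnerId, Remarks

The rex pulls each {...} object into a multivalue field, mvexpand gives each object its own result row, and spath then only has to parse one small object at a time.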
Looks like there were some invisible junk characters in the code. I got it working... Thanks for your help.
Could you provide some sample (dummy) events from both indexes?
My background is network engineering, so I can't speak to any specific software processing benefits of HTTP vs HTTPS. However, since HTTP is essentially plain text, it would be fairly simple to take the packet off the wire. Having to decrypt HTTPS would, by the very nature of being an additional step, add processing requirements, but as pointed out by others, depending upon the compute power of your server(s), there usually isn't a noticeable hit or queuing of data. Most systems today have compute that will outperform the physical network connection.
Hello,

We are using DB Connect to collect logs from Oracle databases. We are using a rising mode input, which requires the database statement be written in a way where the column used for checkpointing is compared against a "?". Splunk DB Connect fills in the "?" with the checkpoint value. Occasionally we will get "ORA-01843: Not a Valid Month" errors on inputs. The error itself is understood.*

The question is: how do we rewrite the query to avoid this, when Splunk/DB Connect is handling how the "?" in the query is replaced? Here is an example query:

SELECT ACTION_NAME,
       CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
WHERE EVENT_TIMESTAMP > ?
ORDER BY EVENT_TIMESTAMP ASC;

How can we format the timestamp in the "?" in a way that the database understands and that meets the DB Connect rising input requirement? Thank you!

*(Our understanding is that it means that the timestamp/time format in the query is not understood by the database. The fact that it happens only occasionally means there is probably some offending row within the results set.)
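One approach that usually sidesteps ORA-01843 is to stop relying on Oracle's implicit string-to-timestamp conversion of the checkpoint value and supply an explicit format mask instead. A sketch, assuming the checkpoint string DB Connect stores looks like 2024-01-31 12:34:56.789 (check the input's saved checkpoint value and adjust the mask to match):

SELECT ACTION_NAME,
       CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
-- TO_TIMESTAMP with an explicit mask removes the dependency on session NLS settings
WHERE EVENT_TIMESTAMP > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS.FF')
ORDER BY EVENT_TIMESTAMP ASC;

This keeps the "?" placeholder DB Connect requires while making the conversion deterministic regardless of the session's NLS_TIMESTAMP_FORMAT.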
I have nothing specific to offer. In a previous job, I used a Python script to parse data and then restructure it so it was easier for Splunk to ingest. It wasn't JSON (I think it was XML), but it still should be pretty straightforward.
Hello, @ITWhisperer! Yes, actually I'm editing the dashboard on https://itsi-*.splunkcloud.com/en-US/app/itsi/itsi_event_management?, and this is the view:

I'm looking for a way to make a simple query in the results, like this code snippet:

"dsQueryCounterSearch1": {
    "options": {
        "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS",
        "queryParameters": {
            "earliest": "$earliest_time$",
            "latest": "$latest_time$"
        }
    },
    "type": "ds.search"
},

but it doesn't return anything. Any idea how to reference the base search like the default queries do?
Hi Marnall,

Thanks for your response. A former employee configured the sc4s, so I don't have any credentials for that. Here are the journalctl logs:

podman[2480968]: Trying to pull docker.io/splunk/scs:latest...
podman[2480968]: Error: Error initializing source docker://splunk/scs:>
podman[2480968]: denied: requested access to the resource is denied
podman[2480968]: unauthorized: authentication required
systemd[1]: sc4s.service: Control process exited, code=exited status=1>
systemd[1]: sc4s.service: Failed with result 'exit-code'.
systemd[1]: Failed to start SC4S Container.

docker.io/splunk/scs:latest is not the same location that is written in the Splunk documentation. Even when I change it and restart, it still fails.
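For what it's worth, current SC4S documentation pulls the container from GitHub Container Registry rather than docker.io, so an old docker.io/splunk/scs reference in the unit file will fail exactly like this. The line to check in the sc4s.service unit file looks something like the following; the ghcr.io path shown is taken from the SC4S docs for the v3 image, so verify it against the SC4S version you actually run before editing:

# /lib/systemd/system/sc4s.service (path and image tag may differ per install)
Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest"

After editing, run systemctl daemon-reload and restart sc4s so podman pulls from the new location.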
We have a 4 search head (SH) cluster. I just found that, when running the command line query from each SH in the cluster, it gives me the right number of events. When running that same command line query from a standalone SH, I get duplicate results. For example, from the Monitoring Console SH, the command line query gives me duplicate results. Same search peers in both cases.
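One quick way to localize this is to count events per search peer from both a clustered SH and the standalone SH and compare; a sketch (index and search terms are placeholders):

index=your_index your_search_terms
| stats count by splunk_server

If the standalone SH shows a peer returning roughly double the expected count, or the same events coming back from two different peers, its distsearch.conf peer list is the place to look for an overlapping entry.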
I have elected a new captain in my SH cluster a few times over the course of a couple of days to see if there was some type of connection issue between specific SHs, but it is still presenting the same error. The only changes in auth.conf were the LDAP servers; the hosts, groupings, and permissions are all identical.
The thing is, if we don't split them at index time, the indexers will have even more work to do, as the structures can be huge. PS. I'm aware of the extra license usage here as well.
1. I'm not sure what you mean by "currEventId and prevEventId in index_1 will have the same values as that of eventId of index_2". Reading it literally, it would mean that currEventId and prevEventId have the same value. So you can use just one of those fields, right?

2. Your stats-based idea looks pretty sound, but:

- You use values(*) as * when you only use some of the fields. If you have many fields, it's good to list them explicitly so you don't waste memory storing fields you'll discard without using them.
- I'm not sure what the part after the stats command is supposed to do. OK, the "where" command may only leave the results "matching both sides of the join". But the rename/table? Rename just for the sake of it? Are you sure you have a field called index1Id?
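For reference, a minimal sketch of the stats-based correlation pattern being discussed, with hypothetical field names (fieldA coming from index_1, fieldB from index_2):

(index=index_1) OR (index=index_2)
| eval joinId=coalesce(currEventId, eventId)
| stats values(fieldA) as fieldA, values(fieldB) as fieldB, dc(index) as indexCount by joinId
| where indexCount=2

The dc(index) filter keeps only IDs seen in both indexes, which is the "matching both sides of the join" behavior the where clause was reaching for.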
index="aws" earliest=-7d@d latest=@d | search "Method request" "systemId" | rex field=_raw "\((?<messageId>[^)]+)\)[^:]+:\s*\{(?<messageFields>.*)\}" | rex field=messageFields "Id=(?<systemId>[^,]+)"... See more...
index="aws" earliest=-7d@d latest=@d | search "Method request" "systemId" | rex field=_raw "\((?<messageId>[^)]+)\)[^:]+:\s*\{(?<messageFields>.*)\}" | rex field=messageFields "Id=(?<systemId>[^,]+)" | rex field=messageFields "product=(?<product>[^,]+(?:,[^,]+)*)(?=, systemId=)" | rex field=_raw "Field=\"(?<eventFieldName>[^\"]+)\"" | rex field=_raw "FieldValue=\"(?<eventFieldValue>[^\"]+)\"" | rex field=_raw "type=\"(?<eventType>[^\"]+)\"" | search product="O,U" systemId!="0454c7f5" | dedup messageId | join type=left messageId [ | from datamodel:"getStatusCodes" | fields messageId, statusCode ] | join type=left systemId [ | from datamodel:"verifyCalls" | rename siteCoreId as systemId | eval Verified="Yes" | fields systemId, Verified ] | eval Verified=coalesce(Verified, "No") | table _time, messageId, systemId, statusCode, Verified | sort - _time | head 10000 Above is the Splunk Query.  When I search this query I get these fields in the output (_time, messageId, systemId, statusCode, Verified) but when I click on "Create Table View" only (_time, messageId, statusCode) fields are coming.