
All Posts

My background is network engineering, so I can't speak to any specific software processing benefits of HTTP vs. HTTPS. However, since HTTP is essentially plain text, it would be fairly simple to take the packet off the wire. Decrypting HTTPS is by its very nature an additional step and adds processing requirements, but as others have pointed out, depending on the compute power of your server(s) there usually isn't a noticeable hit or queuing of data. Most systems today have compute that will outperform the physical network connection.
Hello, We are using DB Connect to collect logs from Oracle databases. We are using a rising-mode input, which requires the database statement to be written so that the column used for checkpointing is compared against a "?". Splunk DB Connect fills in the "?" with the checkpoint value. Occasionally we get "ORA-01843: not a valid month" errors on inputs. The error itself is understood.* The question is, how do we rewrite the query to avoid this, when Splunk/DB Connect handles how the "?" in the query is replaced? Here is an example query:

SELECT ACTION_NAME,
       CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
WHERE EVENT_TIMESTAMP > ?
ORDER BY EVENT_TIMESTAMP ASC;

How can we format the timestamp in the "?" in a way that the database understands and that meets the DB Connect rising-input requirement? Thank you!

*(Our understanding is that it means the timestamp/time format in the query is not understood by the database. The fact that it happens only occasionally means there is probably some offending row within the result set.)
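One approach often suggested for ORA-01843 is to give Oracle an explicit format mask for the bound value instead of relying on the session's implicit NLS conversion. This is only a sketch, assuming the checkpoint value DB Connect stores is an ISO-style timestamp string; verify the actual stored checkpoint for your input before trusting the mask:

SELECT ACTION_NAME,
       CAST((EVENT_TIMESTAMP AT TIME ZONE 'America/New_York') AS TIMESTAMP) extended_timestamp_est
FROM AUDSYS.UNIFIED_AUDIT_TRAIL
-- TO_TIMESTAMP makes the parse format explicit instead of depending on
-- NLS_TIMESTAMP_FORMAT; the mask below is an assumption - match it to
-- the checkpoint value DB Connect actually stores.
WHERE EVENT_TIMESTAMP > TO_TIMESTAMP(?, 'YYYY-MM-DD HH24:MI:SS.FF')
ORDER BY EVENT_TIMESTAMP ASC;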
I have nothing specific to offer. In a previous job, I used a Python script to parse data and then restructure it so it was easier for Splunk to ingest. It wasn't JSON (I think it was XML), but the same approach should be pretty straightforward for JSON; a minimal sketch is below.
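For illustration only, a minimal sketch of that kind of pre-processing, assuming a hypothetical records.xml whose <record> elements hold flat child fields (the file name and tag names are made up):

#!/usr/bin/env python3
# Sketch: flatten an XML file into one JSON object per line so Splunk
# can ingest each line as a separate event.
import json
import xml.etree.ElementTree as ET

tree = ET.parse("records.xml")
for record in tree.getroot().iter("record"):
    # Turn each child element into a key/value pair.
    event = {child.tag: child.text for child in record}
    print(json.dumps(event))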
Hello, @ITWhisperer ! Yes, I'm actually editing the dashboard on https://itsi-*.splunkcloud.com/en-US/app/itsi/itsi_event_management?, and this is the view: (screenshot omitted). I'm looking for a way to run a simple query on the results, like this code snippet:

"dsQueryCounterSearch1": {
    "options": {
        "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS",
        "queryParameters": {
            "earliest": "$earliest_time$",
            "latest": "$latest_time$"
        }
    },
    "type": "ds.search"
},

but it doesn't return anything. Any idea how to reference the base search the way the default queries do? P.S.: (screenshot of the data omitted)
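In Dashboard Studio, a query that starts with a pipe generally has to be a chain search extending a base search, not a standalone ds.search. A sketch, assuming the base search's data source id is dsBaseSearch (substitute your real id); note also that in where, an unquoted AWS is read as a field name, so the value needs quotes:

"dsQueryCounterSearch1": {
    "type": "ds.chain",
    "options": {
        "extend": "dsBaseSearch",
        "query": "| where AlertSource=\"AWS\" AND AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS"
    }
},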
Hi Marnall, Thanks for your response. A former employee configured the sc4s, so I don't have any credentials for it. Here are the journalctl logs:

podman[2480968]: Trying to pull docker.io/splunk/scs:latest...
podman[2480968]: Error: Error initializing source docker://splunk/scs:>
podman[2480968]: denied: requested access to the resource is denied
podman[2480968]: unauthorized: authentication required
systemd[1]: sc4s.service: Control process exited, code=exited status=1>
systemd[1]: sc4s.service: Failed with result 'exit-code'.
systemd[1]: Failed to start SC4S Container.

docker.io/splunk/scs:latest is not the same location as the one written in the Splunk documentation. Even when I change it and restart, it still fails.
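If the unit file is still pulling the old image path, one way to repoint it is a systemd drop-in. This is only a sketch: the image URL below is a placeholder that must be replaced with the exact path from the current SC4S documentation, and it assumes your unit reads the image from an SC4S_IMAGE environment variable as the documented unit file does (if yours hard-codes the URL in ExecStartPre/ExecStart, edit it there instead):

# /etc/systemd/system/sc4s.service.d/override.conf
[Service]
Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest"

Then reload and restart so systemd actually picks up the change:

systemctl daemon-reload
systemctl restart sc4s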
We have a 4 search head (SH) cluster.  I just found that, when running the command line query from each SH in the cluster,  it gives me the right number of events.  When running that same command line query from a standalone SH, I get the duplicate results.  For example, from the Monitoring Console SH, the command line query gives me the duplicate results. Same search peers in both cases.
I have elected a new captain in my SH cluster a few times over the course of a couple of days to see if there was some type of connection issue between specific SHs, but it is still presenting the same error. The only changes in auth.conf were the LDAP servers; the hosts, groupings, and permissions are all identical.
The thing is, if we don't split them at index time, the indexers will have even more work to do, as the structures can be huge. PS. I'm aware of the extra license usage here as well.
1. I'm not sure what you mean by "currEventId and prevEventId in index_1 will have the same values as that of eventId of index_2". Reading it literally, it would mean that currEventId and prevEventId have the same value. So you could use just one of those fields, right?
2. Your stats-based idea looks pretty sound, but:
- You use values(*) as * when you only use some of the fields. If you have many fields, it's good to list them explicitly so you don't waste memory storing fields you'll discard without using them.
- I'm not sure what the part after the stats command is supposed to do. OK, the "where" command may leave only the results "matching both sides of the join", but the rename/table? Rename just for the sake of it? Are you sure you have a field called index1Id?
A join-free sketch with eventstats is below.
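For the record, a sketch of how both origin lookups could be done without join, using the field names from your post; note that eventstats keeps all events in memory, so mind the result-set size:

(index="index_1") OR (index="index_2")
| eval key=if(index=="index_1", currEventId, eventId)
| eventstats values(eventOrigin) as currEventOrigin by key
| eval key=if(index=="index_1", prevEventId, eventId)
| eventstats values(eventOrigin) as prevEventOrigin by key
| where index=="index_1"
| table index1Id, prevEventId, prevEventOrigin, currEventId, currEventOrigin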
index="aws" earliest=-7d@d latest=@d | search "Method request" "systemId" | rex field=_raw "\((?<messageId>[^)]+)\)[^:]+:\s*\{(?<messageFields>.*)\}" | rex field=messageFields "Id=(?<systemId>[^,]+)"... See more...
index="aws" earliest=-7d@d latest=@d | search "Method request" "systemId" | rex field=_raw "\((?<messageId>[^)]+)\)[^:]+:\s*\{(?<messageFields>.*)\}" | rex field=messageFields "Id=(?<systemId>[^,]+)" | rex field=messageFields "product=(?<product>[^,]+(?:,[^,]+)*)(?=, systemId=)" | rex field=_raw "Field=\"(?<eventFieldName>[^\"]+)\"" | rex field=_raw "FieldValue=\"(?<eventFieldValue>[^\"]+)\"" | rex field=_raw "type=\"(?<eventType>[^\"]+)\"" | search product="O,U" systemId!="0454c7f5" | dedup messageId | join type=left messageId [ | from datamodel:"getStatusCodes" | fields messageId, statusCode ] | join type=left systemId [ | from datamodel:"verifyCalls" | rename siteCoreId as systemId | eval Verified="Yes" | fields systemId, Verified ] | eval Verified=coalesce(Verified, "No") | table _time, messageId, systemId, statusCode, Verified | sort - _time | head 10000 Above is the Splunk Query.  When I search this query I get these fields in the output (_time, messageId, systemId, statusCode, Verified) but when I click on "Create Table View" only (_time, messageId, statusCode) fields are coming.
Your case is completely different because you want to keep some of the "outer" information shared between separate events (which actually isn't that good an idea, because your license usage will get multiplied across those events). As for the scripted input - see these resources for the technicalities on the Splunk side. Of course the internals - splitting the event - are entirely up to you. A rough sketch of the splitting part is below.
https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/ScriptSetup
https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs
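A minimal sketch of what the splitting script could look like, assuming a hypothetical JSON feed whose shared "outer" fields sit next to a "stocks" array (the URL and key names are made up for illustration):

#!/usr/bin/env python3
# Sketch of a scripted input: fetch one large JSON document, copy the
# shared outer fields onto each inner record, and print one JSON
# object per line - Splunk reads stdout, so with SHOULD_LINEMERGE=false
# on the sourcetype each line becomes a separate event.
import json
import sys
import urllib.request

FEED_URL = "https://example.com/feed.json"  # placeholder endpoint

with urllib.request.urlopen(FEED_URL) as resp:
    doc = json.load(resp)

outer = {k: v for k, v in doc.items() if k != "stocks"}
for item in doc.get("stocks", []):
    event = {**outer, **item}  # inner fields win on key collisions
    sys.stdout.write(json.dumps(event) + "\n")

Pairing the input with a KV_MODE=json sourcetype then gives you the field extractions at search time.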
Hi @PickleRick, The JSON structure is very stable and doesn't change, except that there can be many (1000+) or few (4) "stock_id" entries. You talked about scripted inputs as well - do you have any suggestions/examples?
Hi @Gravoc , maybe you created the lookup in a different app and didn't set the Global sharing level on the lookup and its definition. The ES lookups, on the other hand, are shared at the Global level, which is probably why it works there. Try sharing both the lookup and the definition as Global. Ciao. Giuseppe
Hi @richgalloway, Thanks for your input. Do you happen to have any scripting ideas for this?
Thank you for the response!!
That one relies on the fact that it was a simple array and could be cut into pieces with regexes. The splitting mechanism would break apart if the data changed - for example, if another field besides the "local" one were added to the "outer" JSON.
I have 2 indexes - index_1 and index_2.

index_1 has the following fields:
- index1Id
- currEventId
- prevEventId

index_2 has the following fields:
- index2Id
- eventId
- eventOrigin

currEventId and prevEventId in index_1 will have the same values as the eventId of index_2. Now I am trying to create a table of the following format:

index1Id | prevEventId | prevEventOrigin | currEventId | currEventOrigin

I tried joins with the query below, but I see that columns 3 and 5 are mostly blank, so I am not sure what is wrong with the query.

index="index_1"
| join type=left currEventId [ search index="index_2" | rename eventId as currEventId, eventOrigin as currEventOrigin | fields currEventId, currEventOrigin]
| join type=left prevEventId [ search index="index_2" | rename eventId as prevEventId, eventOrigin as prevEventOrigin | fields prevEventId, prevEventOrigin]
| table index1Id, prevEventOrigin, currEventOrigin, prevEventId, currEventId

Based on online suggestions, I am also trying the following approach, but couldn't complete it (it does populate all the columns):

(index="index_1") OR (index="index_2")
| eval joiner=if(index="index_1", prevEventId, eventId)
| stats values(*) as * by joiner
| where prevEventId=eventId
| rename eventOrigin AS previousEventOrigin, eventId as previousEventId
| table index1Id, previousEventId, previousEventOrigin

Please let me know an efficient way to achieve this. Thanks
Hi all, I'd like to understand the minimum permissions needed to enable the log flow between GitHub and Splunk Cloud. Going by the documentation for the app, the account used to pull in the logs requires:

- admin:enterprise - Full control of enterprises
- manage_billing:enterprise - Read and write enterprise billing data
- read:enterprise - Read enterprise profile data

Can we reduce the number of high-privileged permissions required for the integration?
I'm not sure it can be done reliably using props and transforms.  I'd use a scripted input to parse the data and re-format it.
And btw this one: How to split JSON array into Multiple events at Index Time?