All Posts


Something like that. Explanation: Splunk generally works (leaving aside all the maintenance that happens behind the scenes) in two pipelines. One set of operations happens during an event's ingestion - the so-called index-time operations. After the event is indexed, there are search-time operations, which happen when you search the indexes and process the results further. So during indexing you rewrite the sourcetype metadata field using TRANSFORMS, and the event gets indexed with the new sourcetype. Then when you search for the event, it is parsed according to that sourcetype's search-time extractions (REPORT and EXTRACT settings), which are defined separately for each of the "new" sourcetypes. This is actually quite a typical use case - split a "combined" sourcetype during indexing into separate ones and define different search-time configurations for each.
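For illustration, a minimal props.conf/transforms.conf sketch of that pattern (all stanza, sourcetype, and field names here are placeholders, not from the thread):

```
# props.conf
[combined_st]
TRANSFORMS-split = set_st_a

# search-time extractions for the rewritten sourcetype
[st_a]
EXTRACT-myfields = ^(?<field_a>\S+)
```

```
# transforms.conf
[set_st_a]
REGEX = pattern_that_identifies_subset_a
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::st_a
```

The index-time transform runs against events arriving as combined_st; the EXTRACT under [st_a] only ever runs at search time, against events whose sourcetype was rewritten.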
I ran into this issue today and came upon this post. The error I was getting was "Non-Displayable Column Type BINARY" for several columns in my select query. As it turns out, you can modify your select query to cast columns as different types in MySQL, which I did, and it solved my issue:

```
SELECT
    CAST(field_name AS CHAR) AS field_name
FROM
    table_name
```

I hope this is helpful to anyone else who encounters the same issue.
Hi @Nawaz Ali.Mohammad Could you please confirm whether there is still no public API to fetch the JVM details? Also, is there a public API to get the application language, e.g. Java, NodeJS, PHP? If not, is it possible to use custom attributes to get such details? Or can we access/execute a query through an API call?
Thanks a lot, this really helps to solve the current problem. But is there a generic way to solve it? Otherwise we would have to write this condition for every API that follows this pattern, e.g.

```
| eval url=if(mvindex(split(url,"/"),1)="getFile","/getFile",url)
| eval url=if(mvindex(split(url,"/"),1)="import","/import",url)
```
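One hedged way to collapse those per-API conditions into a single eval (a sketch; the alternation of path prefixes is an assumption you would extend with your own APIs):

```
| eval first_seg=mvindex(split(url,"/"),1)
| eval url=if(match(first_seg,"^(getFile|import)$"), "/".first_seg, url)
```

If the list of prefixes grows large, a lookup table of known prefixes would scale better than a growing regex.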
This is a followup question to the solution on this thread: https://community.splunk.com/t5/Getting-Data-In/create-multiple-sourcetypes-from-single-syslog-source/m-p/701337/highlight/false#M116063

I'm trying to do exactly what the original question asked, but I need to apply different DELIMS/FIELDS values to the different sourcetypes I create this way. The solution says that once the new sourcetype is created "...just use additional transforms entries with regular expressions that fit the specific subset of data..." Does this mean that if I want to further extract fields from the new sourcetype I can only do that using TRANSFORMS from that point forward, or would I be able to put a new stanza further down in props.conf for [my_new_st] and use additional REPORTs or EXTRACTs that only apply to that new sourcetype?

For example, can I do something like the following? Description: first split the individual events based on the value regex-matched in the 5th field, then do different field extractions for each of the new sourcetypes.

props.conf:

```
[syslog]
TRANSFORMS-create_sourcetype1 = create_sourcetype1
TRANSFORMS-create_sourcetype2 = create_sourcetype2

[sourcetype1]
REPORT-extract = custom_delim_sourcetype1

[sourcetype2]
REPORT-extract = custom_delim_sourcetype2
```

transforms.conf:

```
[create_sourcetype1]
REGEX = ^(?:[^ \n]* ){5}(my_log_name_1:)\s
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype1

[create_sourcetype2]
REGEX = ^(?:[^ \n]* ){5}(my_log_name_2:)\s
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype2

[custom_delim_sourcetype1]
DELIMS = " "
FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_url,cs_bytes,cs_port

[custom_delim_sourcetype2]
DELIMS = " "
FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_username,sc_http_status
```
will do, thanks
@StephenD1  Start a new thread; that is better practice than trying to resurrect a post that has already been answered. You can then reference this previous response.
HaltedCyclesPerDayMA is computed in the eval line above. As shown, the query gives me a stacked column chart (stacked by cycle); I want HaltedCyclesPerDayMA as a line overlay (showing the moving average on top of the raw data).
We have started to notice that since our recent upgrade to Splunk Cloud 9.2 we have been facing authentication issues where our add-ons stop working with errors like "No AWS account named", "Unable to obtain access token", and "Invalid client secret provided" for our AWS and Azure add-ons. Basically, anything requiring Splunk to decrypt credentials stored in passwords.conf is affected. We're currently engaged with Splunk Support to find a root cause. Has anyone else faced this same problem since upgrading to 9.2?
Sorry to resurrect this thread, but I have a question about your last paragraph. When you say "...just use additional transforms entries with regular expressions that fit the specific subset of data...", does this mean that if I want to further extract fields from the new sourcetype=my_new_st, for example, I have to do that using TRANSFORMS? In other words, would I be able to put a new stanza further down in props.conf for

```
[my_new_st]
...
```

and then use additional REPORTs or EXTRACTs that only apply to that new sourcetype?
Hi @ITWhisperer Thanks for the response. But instead of hard-coding the week number to generate the deviation

```
| eval Deviation=2*Week_41/(Week_39+Week_40)
```

can we supply the week value dynamically, as below?

```
| eval Deviation=2*Week_{current_week}/(Week_{current_week - 1} + Week_{current_week - 2})
```

Thanks in advance.
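One hedged way to make this dynamic is to relabel each week by its offset from the current week (a sketch; it assumes Week holds the ISO week number of the event and that the window does not cross a year boundary, where week numbers wrap):

```
| stats count as Total by field1 field2 field3 Day Time Week
| eval offset=tonumber(strftime(now(),"%V")) - Week
| eval Week_minus_{offset} = Total
| stats values(Week_minus_*) as Week_minus_* by field1 field2 field3 Day Time
| fillnull value=0
| eval Deviation=2*Week_minus_0/(Week_minus_1 + Week_minus_2)
```

Because the columns are named by offset rather than by absolute week number, the final eval never has to change.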
Multiple questions on the same post might be misleading to others in the future. Please ask it as a new question.

For granting third-party access to Splunk dashboards, here are some options and best practices:

Embedded reports: You can use Splunk's embed functionality to share specific reports or dashboards. This method allows you to control exactly what data is shared. Reference: https://docs.splunk.com/Documentation/Splunk/latest/Report/Embedscheduledreports

Summary indexing and role-based access:
- Collect relevant data in a summary index with a specific source.
- Create a dedicated Splunk role for the third party.
- Map this role to their AD/LDAP group.
- Set search restrictions for this role to only access the required source/sourcetype, so you don't give access to the entire index (sketched below).

Hope this helps. Karma would be appreciated.
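For that last restriction, a minimal authorize.conf sketch (the role, index, and source names here are assumptions, not from the post):

```
# authorize.conf
[role_third_party]
importRoles = user
srchIndexesAllowed = summary_3rdparty
srchIndexesDefault = summary_3rdparty
srchFilter = source=thirdparty_summary
```

srchFilter is applied to every search the role runs, so even within the allowed index the role only sees the intended source.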
Hi @ITWhisperer The query is working, but the result is not as expected. The timeframe is also not returning the correct results. I need the highest count for the past 30 days, with the country having the highest count appearing first, followed by the other countries in descending order. Below is the current result.
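For reference, a minimal pattern that pins both the window and the ordering (a sketch; the index and field names are assumptions):

```
index=your_index earliest=-30d@d latest=now
| stats count by Country
| sort 0 - count
```

sort 0 removes the default 10,000-row limit, so every country stays in descending order.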
I'm trying to implement the Splunk Machine Learning Toolkit query found here: https://github.com/splunk/security_content/blob/develop/detections/cloud/abnormally_high_number_of_cloud_security_group_api_calls.yml

Actually just the first part:

```
| tstats count as all_changes from datamodel=Change_test where All_Changes.object_category=* All_Changes.status=* by All_Changes.object_category All_Changes.status All_Changes.user
```

But I'm getting this error. How do I fix this?
@sainag_splunk  Thank you. Is there another way? We are trying not to give third-party users access to Splunk indexes. All the best!
Some sample searches to start with, as requested. You can adjust the time spans and thresholds as needed. These queries should provide a foundation for your AUTHZ usage dashboard, balancing detail with performance.

Total AUTHZ attempts:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| stats count as Total
```

Successful vs. failed authorizations:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| stats count(eval(INFO="success" OR match(ERROR,"user failure"))) as Success, count as Total
| eval Failed = Total - Success
| eval Success_Rate = round((Success/Total)*100,2)
| table Success, Failed, Total, Success_Rate
```

Authorization attempts by host:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| stats count as Attempts by host
| sort -Attempts
| head 10
```

Peak authorization times and average response time:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word")
| timechart span=15min count as Attempts avg(duration) as avg_duration perc95(duration) as p95_duration
| eval avg_duration=round(avg_duration/1000,2)
| eval p95_duration=round(p95_duration/1000,2)
```
Hi @Meett! Thanks for sharing the article; this looks closer to what I'm looking to achieve. Looking more closely, it still seems to reference an IAM user/access key ID for “Account A” in the example, which is what I would like to avoid if possible. Is there any way for me to configure the trust policy on the IAM role in my AWS account so that a Splunk-managed IAM role in Splunk's account can be granted cross-account access to assume our role, using sts:AssumeRole? Thanks!
HaltedCycleSecondsPerDayMA is not included in the chart command, which is why it is removed from the event fields. What were you expecting to be there? How was it supposed to have been calculated (by the chart command)?
```
| stats count as Total by field1 field2 field3 Day Time Week
| eval Week_{Week} = Total
| stats values(Week_*) as Week_* by field1 field2 field3 Day Time
| fillnull value=0
| eval Deviation=2*Week_41/(Week_39+Week_40)
```
I have this on other panels but can't get it on a stacked column chart:

```
| streamstats current=f last(Timestamp) as HaltedCycleLastTime by Cycle
| eval HaltedCycleSecondsHalted=round(HaltedCycleLastTime - Timestamp,0)
| eval HaltedCycleSecondsHalted=if(HaltedCycleSecondsHalted < 20,HaltedCycleSecondsHalted,0)
| streamstats time_window=30d sum(HaltedCycleSecondsHalted) as HaltedCycleSecondsPerDayMA
| eval HaltedCycleSecondsPerDayMA=round(HaltedCycleSecondsPerDayMA,0)
| chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0
```

This produces a stacked column chart based on the chart command, but in Dashboard Studio I expect to see HaltedCycleSecondsPerDayMA as a pickable field and I don't. I added it to the code as an overlay field but it is still not showing.
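One hedged way to carry the moving average through to the chart output is to build the table with stats and xyseries instead of chart, appending the MA as its own column (a sketch; it assumes the MA should be a single value per CycleDate, taken here as the max across cycles):

```
| streamstats current=f last(Timestamp) as HaltedCycleLastTime by Cycle
| eval HaltedCycleSecondsHalted=round(HaltedCycleLastTime - Timestamp,0)
| eval HaltedCycleSecondsHalted=if(HaltedCycleSecondsHalted < 20,HaltedCycleSecondsHalted,0)
| streamstats time_window=30d sum(HaltedCycleSecondsHalted) as HaltedCycleSecondsPerDayMA
| eval HaltedCycleSecondsPerDayMA=round(HaltedCycleSecondsPerDayMA,0)
| stats sum(HaltedCycleSecondsHalted) as value max(HaltedCycleSecondsPerDayMA) as MA by CycleDate Cycle
| appendpipe [ stats max(MA) as value by CycleDate | eval Cycle="HaltedCycleSecondsPerDayMA" ]
| fields - MA
| xyseries CycleDate Cycle value
```

Because chart only keeps the fields it aggregates, anything computed earlier disappears; appendpipe re-injects the MA as an extra series named HaltedCycleSecondsPerDayMA, which should then be selectable as an overlay field in Dashboard Studio.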