All Posts

Multiple questions on the same post might be misleading to others in the future. Please ask it as a new question. For granting third-party access to Splunk dashboards, here are some options and best practices:

Embedded reports: You can use Splunk's embed functionality to share specific reports or dashboards. This method allows you to control exactly what data is shared. Reference: https://docs.splunk.com/Documentation/Splunk/latest/Report/Embedscheduledreports

Summary indexing and role-based access: Collect the relevant data in a summary index with a specific source. Create a dedicated Splunk role for the third party. Map this role to their AD/LDAP group. Set search restrictions for this role so it can only access the required source/sourcetype, rather than the entire index.

Hope this helps. Karma would be appreciated.
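As a rough sketch of the role-based option (the role name, index name, and source value below are placeholders, not settings from any specific environment), the restrictions can be defined in authorize.conf on the search head:

```
# authorize.conf -- example role for third-party dashboard access
# Role name, index, and source are placeholders; adjust to your environment.
[role_thirdparty_viewer]
importRoles = user
srchIndexesAllowed = summary_thirdparty
srchIndexesDefault = summary_thirdparty
srchFilter = source="thirdparty_summary"
```

With this in place, searches run by members of the role are automatically filtered to the summary index and source, so the underlying indexes stay out of reach.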
Hi @ITWhisperer  The query is working, but the result is not as expected. The timeframe is also not returning the correct results. I need the highest count for the past 30 days, with the country having the highest count appearing first, followed by the other countries in descending order. Below is the current result.
I'm trying to implement the Splunk Machine Learning Toolkit query found here: https://github.com/splunk/security_content/blob/develop/detections/cloud/abnormally_high_number_of_cloud_security_group_api_calls.yml Actually just the first part: | tstats count as all_changes from datamodel=Change_test where All_Changes.object_category=* All_Changes.status=* by All_Changes.object_category All_Changes.status All_Changes.user But I'm getting this error. How do I fix this?
@sainag_splunk  Thank you! Is there another way? We are trying not to give third-party users access to Splunk indexes. All the best!
Some sample searches to start with, as requested. You can adjust the time spans and thresholds as needed. These queries should provide a foundation for your AUTHZ usage dashboard, balancing detail with performance.

Total AUTHZ attempts:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | stats count as Total
```

Successful vs. failed authorizations:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | stats count(eval(INFO="success" OR match(ERROR,"user failure"))) as Success, count as Total | eval Failed = Total - Success | eval Success_Rate = round((Success/Total)*100,2) | table Success, Failed, Total, Success_Rate
```

Authorization attempts by host:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | stats count as Attempts by host | sort -Attempts | head 10
```

Peak authorization times and average response time:

```
index=yourindexname tag=name NOT "health-*" (words="Authentication words" OR MESSAGE_TEXT="Authentication word") | timechart span=15min count as Attempts avg(duration) as avg_duration perc95(duration) as p95_duration | eval avg_duration=round(avg_duration/1000,2) | eval p95_duration=round(p95_duration/1000,2)
```
Hi @Meett! Thanks for sharing the article; this looks closer to what I'm looking to achieve. Looking closer at it, the article still seems to reference an IAM user/access key ID for "Account A" in the example, which is what I would like to avoid if possible. Is there any way for me to configure the trust policy on the AWS IAM role in my AWS account so that a Splunk-managed AWS IAM role in Splunk's account can be granted cross-account access to assume our role, using sts:AssumeRole? Thanks!
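For context, the mechanism being asked about would be a standard cross-account trust policy. A sketch is below; the Splunk-side account ID, role name, and external ID are placeholders (whether Splunk Cloud exposes a managed role ARN to trust is exactly the open question here):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/SplunkManagedRole"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

Attaching a policy like this to the role in your account would let the named external role assume it without any long-lived access keys, provided the caller presents the matching external ID.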
HaltedCycleSecondsPerDayMA is not included in the chart command, which is why it is removed from the event fields. What were you expecting to be there? How was it supposed to have been calculated (by the chart command)?
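If the goal is to keep the moving average available alongside the stacked columns, one untested sketch (where `<your base search>` is a placeholder for the pipeline that feeds the streamstats commands) is to compute the per-CycleDate value in a subsearch and join it back after the chart, so it survives as a pickable column:

```
| chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0
| join type=left CycleDate
    [ search <your base search>
      | stats max(HaltedCycleSecondsPerDayMA) as HaltedCycleSecondsPerDayMA by CycleDate ]
```

The chart command only emits its split-by and aggregated fields, so any field you want as an overlay has to be reattached (or aggregated) explicitly.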
| stats count as Total by field1 field2 field3 Day Time Week
| eval Week_{Week} = Total
| stats values(Week_*) as Week_* by field1 field2 field3 Day Time
| fillnull value=0
| eval Deviation=2*Week_41/(Week_39+Week_40)
I have this on other panels but can't get it on a stacked column chart.

| streamstats current=f last(Timestamp) as HaltedCycleLastTime by Cycle
| eval HaltedCycleSecondsHalted=round(HaltedCycleLastTime - Timestamp,0)
| eval HaltedCycleSecondsHalted=if(HaltedCycleSecondsHalted < 20,HaltedCycleSecondsHalted,0)
| streamstats time_window=30d sum(HaltedCycleSecondsHalted) as HaltedCycleSecondsPerDayMA
| eval HaltedCycleSecondsPerDayMA=round(HaltedCycleSecondsPerDayMA,0)
| chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0

This produces a stacked column chart based on the chart command, but in Dashboard Studio I expect to see HaltedCycleSecondsPerDayMA as a pickable field and I don't. I added it to the code as overlay fields but it is still not showing.
Thanks for the response, and I absolutely agree. Enabling the ingress was purely for testing purposes, and I have no intention of doing this in the final deployment. I was able to figure out my issue: had I thoroughly read the Splunk docs about clustering, I would have caught a small section that says that when using an L7 load balancer you need to ensure the LB configuration uses sticky/persistent user sessions. So essentially what was happening is that I would get the login screen from idx-0, and when I hit enter it would make a new GET request and the LB would round-robin the request to a different indexer pod that I had not logged into.
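For anyone hitting the same symptom, a minimal cookie-based persistence sketch is below, using HAProxy purely as an example (the backend name, server names, and addresses are placeholders):

```
# haproxy.cfg -- cookie-based session persistence (placeholders throughout)
backend splunk_idx_web
    balance roundrobin
    # Insert a SERVERID cookie so each browser session sticks to one pod
    cookie SERVERID insert indirect nocache
    server idx-0 10.0.0.10:8000 check cookie idx-0
    server idx-1 10.0.0.11:8000 check cookie idx-1
```

With the cookie in place, the login POST and all subsequent requests land on the same indexer pod that issued the session, which avoids the round-robin login loop described above.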
The only reason I can think of for this issue is a permission conflict. Did you look at search.log as mentioned? Try comparing the search.log files for the working and non-working instances. Without knowing the full search details, it's hard to validate exactly what's going on. There must be settings defined for this sourcetype. Try running the btool command (substituting your sourcetype name) and see if you can find anything relevant:

splunk btool props list <sourcetype> --debug

Hope this helps.
I'm not sure what is happening with your login attempts, but I highly recommend you do not enable the web UI on any indexer. The cluster should only be managed from the Cluster Manager; with the web UI enabled on indexers, there is a very high risk of configurations getting out of sync.
On browser tests we have auto-retry enabled, and when a test fails, auto-retry kicks in and updates the results. On the browser test Page Availability section, clicking around I could see this flyout: "Multiple runs found for Uptime". How do I view this section? (I'm having a hard time finding it.)
Splunk's introspection data is a snapshot in time and reflects reality every 10 seconds. https://docs.splunk.com/Documentation/Splunk/9.3.1/RESTREF/RESTintrospect#server.2Fstatus.2Fresource-usage.2Fiostats

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval pct_mem=round(('data.mem_used'/'data.mem')*100,2)
| timechart span=10s max(pct_mem) as pct_mem

That will give you the overall view.

index=_introspection sourcetype=splunk_resource_usage component=PerProcess "data.mem_used"="*"
| rename data.* as *
| timechart span=10s max(mem_used) as mem_used by process_type

This will break it down by process over time.

Compare with your VM metrics; perhaps VMC is reporting averages or medians per time period.
We have some tokens that are due to expire shortly.

Q1: Does the 'Default' token automatically rotate?
Q2: How do you manually rotate a token using the dashboard? (I am aware of the API option.)
Q3: If the API call is the only option, what permissions are required to make the 'rotate' API call?

Thanks in anticipation. Ian
Yes those are two separate issues. 
Hi, can someone please tell me how we can compare the value of a particular day with the value of the same day of the previous week and create a new field as the deviation?

Example: the command below generates the output shown:

| stats sum(Number_Events) as TOTAL by Field1 Field2 Field3 Day Time Week_of_year Total

We need the output like below:
1. In tabular form: is it possible to have an output like below?
2. If point 1 is possible, is it then possible to have a timechart with 3 lines over the 24 hours of the day? Example data for 3 hours is attached.
The 1st line corresponds to week of year -2 (39), the 2nd line to week of year -1 (40), and the 3rd line to the current week of year (41).

Thanks in advance for helping me out.
I find this very confusing; it seems like you have overlapped two separate issues.
Same problem here; it's difficult to find how a simple option like option name=count works in Studio. I want to display more than 100 rows without a next button. After searching in the Studio dashboard editor I found it, and indeed it is simple: Data display → Rows displayed.
Create an app to be pushed from the CM to the IDX tier and put in an inputs.conf file. https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Inputsconf#HTTP_Event_Collector_.28HEC.29_-_Local_stanza_for_each_token

[http://sc4s]
token = XXXXX
index = target-index-name

### This is the bare minimum I suggest ###
SC4S may require a sourcetype; other vendor sources may already come with that value assigned.