Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Can someone please help me with this rule? I have been assigned to create a bunch of similar rules but I am struggling with a few. This is what I have so far:

Strategy Abstract
The strategy works as follows: use tstats to summarize SMB traffic data, then identify internal hosts scanning outbound for open SMB ports on external hosts.

Technical Context
This rule focuses on detecting abnormal outbound SMB traffic.

The SPL is generating 0 errors but also 0 matches:

| tstats summariesonly=true allow_old_summaries=true
    values(All_Traffic.dest_ip) as dest_ip
    dc(All_Traffic.dest_ip) as unique_dest_ips
    values(All_Traffic.dest_port) as dest_port
    values(All_Traffic.action) as action
    values(sourcetype) as sourcetype
  from datamodel=Network_Traffic.All_Traffic
  where (All_Traffic.src_ip [inputlookup internal_ranges.csv | table src_ip]
         OR NOT All_Traffic.dest_ip [inputlookup internal_ranges.csv | table dest_ip])
        AND All_Traffic.dest_port=445
  by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50
| search NOT [| inputlookup scanners.csv | table ip | rename ip as src_ip]
| search NOT src_ip = "x.x.x.x"
| head 51

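A note that may help here: in a tstats where clause, a subsearch must return the datamodel-qualified field name, so "table src_ip" alone produces a filter on a field tstats does not know about. A minimal sketch of that fix for the source side, assuming internal_ranges.csv holds literal IP values rather than CIDR blocks (CIDR ranges would need a CIDR-enabled lookup definition instead); the destination exclusion would follow the same rename pattern:

| tstats summariesonly=true allow_old_summaries=true
    dc(All_Traffic.dest_ip) as unique_dest_ips
  from datamodel=Network_Traffic.All_Traffic
  where All_Traffic.dest_port=445
    [ | inputlookup internal_ranges.csv
      | table src_ip
      | rename src_ip as All_Traffic.src_ip ]
  by _time All_Traffic.src_ip span=5m
| `drop_dm_object_name(All_Traffic)`
| where unique_dest_ips>=50
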
Hi Community, I'm fairly inexperienced when it comes to anything other than quite basic searches, so my apologies in advance. I have a field which returns several values, and I only wish to return one of them in my searches. The field name is "triggeredComponents{}.triggeredFilters{}.trigger.value" and it returns several values of different types, for example:

1
5
out
text / text text text
hostname1
hostname2
445

I only wish to retrieve and view the "text / text text text" value, and then pop that into a | stats command. Can someone please offer some advice? Many thanks in advance!

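One possible approach is mvfilter, which keeps only the entries of a multivalue field matching a condition; field names containing braces need single quotes inside eval. A sketch, assuming the value you want is the only one containing a "/" character (that pattern is an assumption; adjust the match to whatever reliably distinguishes your value):

... | eval wanted=mvfilter(match('triggeredComponents{}.triggeredFilters{}.trigger.value', "/"))
    | stats count by wanted
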
Using the Add-On Builder I built a custom Python app to collect some asset information over an API. I'll preface all of this by saying my custom Python code works all the time, every time, when run in VS Code; no hiccups.

Using a select statement in the API request, I can gather specific fields. The more fields I define, the more issues I run into in Splunk. Basically it feels like the app is rate limited. I would expect it to run for just under an hour; it usually fails after 10 minutes and starts again at the configured 2-hour (7200 second) interval from the input page. If I define fewer fields in the select request, it runs a little longer but still ends up failing, and obviously I'm not getting the data I want. If I set the bare minimum of one field, it runs for the expected time, stops, and starts again at its configured interval.

I'm hesitant to say which platform, but it is cloud based. I'm running my app from an on-prem heavy forwarder indexing to Splunk Cloud. The input interval is configured to 2 hours. The Python script iterates through requests due to paging limitations, with delays between requests based on some math I did with the total number of assets and pages; it's about 3 seconds between requests. But again, my code works flawlessly in VS Code, and I have no reason to believe the target API is rate limiting me at that request interval.

I've opened a ticket with Splunk, but I wanted to see if anyone else has experience with the Splunk Add-on Builder and custom Python modules.

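Not a fix, but one way to narrow down whether the input is crashing, timing out, or being killed is to check the add-on's own log in _internal on the heavy forwarder around the 10-minute mark. A sketch, assuming the Add-on Builder input writes a log file named after the TA (substitute your actual add-on name):

index=_internal source=*your_ta_name*.log* (ERROR OR WARN)
| sort - _time
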
Hello, I'm working on updating a dashboard panel to handle numerical values. I want the panel to show red any time the count is not zero. This always works when the number is positive, BUT occasionally we have a negative number, such as "-3". Is there a way to make negative values red as well? Basically, anything that isn't zero should be red. Thanks for the help!

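If the panel's existing range coloring already turns red for anything above zero, one low-effort workaround is to color on the absolute value, so negatives trip the same threshold. A sketch, assuming the displayed field is called count:

... | eval count_abs=abs(count)

Point the existing color thresholds at count_abs: zero stays in the green range and both 3 and -3 land in the red one.
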
Hi all, I need to create some correlation searches on Splunk audit events, but I couldn't find any documentation about the events to search. For example, I don't know how to identify the creation of a new role or updates to an existing one; I found only action=edit_roles, but from that I can only tell the associated user, not which role was changed. Can anyone point me to a URL with Splunk audit information? Ciao. Giuseppe

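As a starting point, role edits land in the _audit index, and on Splunk 9.x the configuration change tracker additionally records which authorize.conf stanza (that is, which role) changed. A sketch of both; the _configtracker field names below are assumptions to verify against your own events:

index=_audit action=edit_roles
| table _time user action info

index=_configtracker data.path=*authorize.conf*
| table _time data.path data.changes{}.stanza
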
Hi Splunkers, today I have a question that is not about a "technical how": my doubt is about best practice.

Environment: a Splunk Cloud combo instance (Core + Enterprise Security) with some heavy forwarders.

Task: perform some field extractions.

Details: the add-ons for parsing are already installed and configured, so we don't need to create new ones; we simply need to enrich/expand the existing ones. Those add-ons are installed on both the cloud components and the HFs.

The point is this: since we already have add-ons for parsing, we could simply edit their props.conf and transforms.conf files; of course, since the add-ons are installed on both the cloud components and the HFs, we would have to perform the changes on all of them. For example, performing the add-on edits only on the cloud components with GUI field extraction implies that the new fields will be extracted at search time there, because they will not be pre-parsed by the HFs. Plus, we know that we should put a copy of those files in the local folder, to avoid editing the default ones, etcetera, etcetera.

But, at the same time, for our SOC we created a custom app used as a container to store all customizations performed by/for them, following one of Splunk's best practices. We store reports, alerts, and so on there: by "we store there" I mean that, when we create something and choose an app context, we pick our custom SOC app. With this choice, we could simply perform a field extraction with the GUI and assign our custom app as the context; of course, with this technique, the custom regexes are saved only on the cloud components and not on the HFs.

So, my question is: when we talk about field extraction, if we consider that pre-parsing performed by the HFs is desired but NOT mandatory, what is the best choice? Maintain all field extractions in the add-ons, or split them between the out-of-the-box ones and the custom ones, using our custom SOC app?

Hello everyone, I am still relatively new to Splunk. I would like to add an additionalTooltipField to my maps visualization, so that when you hover over a marker point, more data details about the marker appear. I have formulated the following query:

source="NeueIP.csv" host="IP" sourcetype="csv"
| rename Breitengrad as latitude, L__ngengrad as longitude, Stadt as Stadt, Kurzbeschreibung as Beschreibung
| eval CPU_Auslastung = replace(CPU_Auslastung, "%", "")
| eval CPU_Auslastung = tonumber(CPU_Auslastung)
| eval CPU_Color = case(
    CPU_Auslastung > 80.0, "#de1d20",
    CPU_Auslastung > 50.0, "#54afda",
    true(), "#4ade1d")
| table Stadt, latitude, longitude, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung, CPU_Color
| eval _time = now()

And I tried to adjust some things in the source code so that the additionalTooltipField appears. Here is the relevant part:

"visualizations": {
  "viz_map_1": {
    "type": "splunk.map",
    "options": {
      "center": [50.35, 17.36],
      "zoom": 4,
      "layers": [
        {
          "type": "marker",
          "latitude": "> primary | seriesByName('latitude')",
          "longitude": "> primary | seriesByName('longitude')",
          "dataColors": "> primary | seriesByName(\"CPU_Auslastung\") | rangeValue(config)",
          "additionalTooltipFields": "> primary | seriesByName(\"Stadt\")",
          "markerOptions": {
            "additionalTooltipFields": ["Stadt", "Kurzbeschreibung"]
          },
          "hoverMarkerPanel": {
            "enabled": true,
            "fields": ["Stadt", "Kurzbeschreibung"]
          }
        }
      ]
    },

My sample data is as follows:

Stadt, Breitengrad, Längengrad, Kurzbeschreibung, Langbeschreibung, CPU_Auslastung
Berlin, 52.52, 13.405, BE, Hauptstadt Deutschlands, 45%
London, 51.5074, -0.1278, LDN, Hauptstadt des Vereinigten Königreichs, 65%
Paris, 48.8566, 2.3522, PAR, Hauptstadt Frankreichs, 78%

Is my plan possible? Thanks for your help in advance!!

Hi Team, in a role we are providing a user with read-only access, and we have set up the capabilities, inheritance, resources, and restrictions. But that user is still able to delete queries and delete reports. How do we hide the delete option on the report? Please guide us on the process.

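One way to see exactly what the role grants is to inspect it over REST and compare it with a stock read-only role; note that the delete option on a report generally tracks ownership and write permission on that knowledge object rather than a single capability. A sketch ("your_role" is a placeholder):

| rest /services/authorization/roles splunk_server=local
| search title="your_role"
| table title capabilities imported_capabilities imported_roles
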
Hi, does anyone have experience in monitoring Azure Integration Services with AppDynamics? Suggestions on a setup would be appreciated. The services will be calling an on-premise .NET application; the ability to drill down downstream is not a must but would be really nice to have. br Kjell Lönnqvist

Hi All, I have a dashboard with 3 radio buttons: Both, TypeA, and TypeB. I also have a table. The requirement is that if I select Both or TypeA in the radio buttons, columnA and columnB in the table should be highlighted; if I select TypeB, only columnA should be highlighted. How can I achieve this? I have tried using color palette expressions like below, but no luck. Does anyone have a solution for this?

<format type="color" field="columnA">
  <colorPalette type="list">["#00FFFF"]</colorPalette>
</format>
<format type="color" field="columnB">
  <colorPalette type="expression">if(match(Type,"TypeB")," ", "#00FFFF")</colorPalette>
</format>

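One workaround to consider: a colorPalette expression is evaluated against fields in the result row, not against input tokens, so the radio token can be folded into the search as a helper field that the expression then tests. A sketch, assuming the radio input sets a token named type_tok:

... | eval highlightB=if("$type_tok$"="TypeB", "no", "yes")

The columnB expression would then test highlightB rather than Type, returning the highlight color only when it is "yes".
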
Hello, firstly, the requirement is that we want to monitor the Docker containers present on a server. We first tried the approach of instrumenting our machine agent inside each Docker container, but with this approach our Docker image becomes heavy and our application performance may decrease. So instead we instrumented a machine agent in its own container on that local server; that machine agent works correctly and provides metrics for some containers, but not for all of them. We took as reference the GitHub repository (https://github.com/Appdynamics/docker-machine-agent.git), but in our environment there are 40 containers, and with this method it is monitoring only 9 of them. Can anyone help me solve this issue? Here you can see only 9 containers. Regards, Dishant

Hi Splunk Experts, we are trying to integrate CA UIM with Splunk to send Splunk alerts to CA UIM. We installed the Nimbus (CA UIM) add-on and configured an alert to trigger events, and we also installed the Nimbus agent on the Splunk Enterprise server (deployed on Linux x64) as per the instructions, but no alerts are triggered for the search even when the condition matches. However, when we check manually, we can see many triggered alerts under the triggered section. Can anyone suggest what the issue could be and how to resolve it? Below is the search and alert configuration. Thank you in advance. Regards, Eshwar

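One diagnostic that often helps with alert actions that never fire: every custom alert action run is logged to _internal under component=sendmodalert, including errors from the action script. A sketch (the action name is an assumption; match it to the Nimbus add-on's actual alert action name):

index=_internal sourcetype=splunkd component=sendmodalert action="nimbus*"
| sort - _time

If nothing at all shows up there for the search, the action is likely not attached to the alert; if entries show up with errors, the add-on configuration is the place to look.
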
Hello, one of our MF Local Administrative Group Member rules is generating a significant number of alerts because the sccmadmin group was removed from an MF member server; assistance is needed in refining this search to minimize unnecessary alerts.

index=foo sourcetype=XmlWinEventLog (EventCode=4732) dest="mf" user!="nt service"
    NOT (EventCode="4732" src_user="root" MemberSid="Domain Admins" Group_Name="Administrators")
    NOT (EventCode="4732" MemberSid="NT SERVICE\\*" (Group_Name="Administrators" OR Group_Name="Remote Desktop Users"))
| eval user=lower(MemberSid)
| eval src_user=lower(src_user)
| stats values(user) as user, values(Group_Domain) as Group_Domain, values(dest) as dest
    by src_user, Group_Name, EventCode, signature, _time

Thanks...

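One refinement pattern that keeps the SPL from growing a NOT clause per exception is moving approved or expected changes into a lookup and subtracting them. A sketch, using a hypothetical lookup approved_group_changes.csv with src_user and Group_Name columns:

... | search NOT [| inputlookup approved_group_changes.csv | fields src_user Group_Name]
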
Hi Guys (and Gals), hopefully a quick question; it's late, so my brain isn't firing quickly/properly. I need to run a query to get the ingestion over time across two variables: host and index. In this specific case, I need to determine the data ingestion from a specific set of hosts, and whether the inbound data has been increasing more than normally expected. So the query would look something like:

index=linuxos host IN (server1, server2, server3...)   [or possibly you may have a lookup of the set of hosts]
| eval sum(the data per host over hour {or whatever regular chunk of time you want} for a 7 day period)
| timechart xyz ==> chart over a line graph

Also, if there is a relevant dashboard/console in the Monitoring Console that I am not thinking of, please direct me to the relevant menu or docs. I appreciate any assistance.

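One concrete way to get volume per host per hour without scanning the raw events is the license usage log (on the license master). A sketch, assuming the hosts' data lands in the linuxos index; note the h (host) field gets squashed to a blank value on days with too many distinct hosts:

index=_internal source=*license_usage.log* type=Usage idx=linuxos
| eval mb=round(b/1024/1024, 2)
| timechart span=1h sum(mb) as MB by h

In the Monitoring Console, Indexing > License Usage offers a similar historic split by host, index, or sourcetype.
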
Join the event Get Resiliency in the Cloud on January 18th, 2024 (8:30AM PST). You will hear from industry experts from Pacific Dental Services, IDC, The Futurum Group CEO Daniel Newman, and Splunk leaders on how to build resilience for your expansion to the cloud. You will learn about the drivers that lead enterprises to build data-centric security and observability use cases on Splunk Cloud Platform, delivered as a service, and its benefits. Additionally, you will learn about:

How digital transformation is influencing businesses to expand to the cloud
The cloud transformation journey of Pacific Dental Services with Splunk
New advancements in Splunk Cloud Platform that accelerate the journey to cloud
Achieving faster value realization with Splunk services

Register today for the event Get Resiliency in the Cloud happening on January 18th, 2024 (8:30AM PST).

Hello, I have a search that comes back with 'src', which is the source IP of a client, and I have a lookup file called "networks.csv" that has a column with header 'ip' containing a list of CIDR networks. I have gone into the lookup definitions and set "CIDR(ip)" under the advanced options for that lookup file. I can see the headers being automatically extracted in that UI. However, when I run the search and try to pull the category for the respective network of 'src', it does not work.

basesearch | lookup networks.csv ip as src_ip OUTPUT category

I have validated that it's a CIDR issue by doing a "...| rex mode=sed field=src_ip" and placing a literal CIDR entry in there, and having the category come out. Thank you for your help!

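For what it's worth, the CIDR match type lives on the lookup definition, not on the file, so invoking the lookup by its .csv filename bypasses the advanced options entirely. A sketch of the likely fix, assuming the definition was saved under the name networks (substitute whatever appears in Settings > Lookups > Lookup definitions):

basesearch | lookup networks ip as src_ip OUTPUT category
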
Hi, is it possible to create a tab on a dashboard while also creating a redirection to a new dashboard when the tab is clicked, without having to clone the dashboard? Thanks in advance!

I have an index that is receiving JSON data from a HEC, with 2 different data sets and about 2M events per day:

DS1  {guid:"a1b2",resourceId="enum",sourcenumber:"55512345678"}
DS2  {guid:"a1b2",resourceId="enum",disposition:"TERMINATED"}

Now, counting terminated is easy and fast; this runs in 1s for all calls yesterday:

index="my_data" resourceId="enum*" disposition="TERMINATED"
| stats count

But counting the top 10 terminated is not so fast; this takes almost 10 minutes on the same interval (yesterday):

index="my_data" resourceId="enum*"
| stats values(*) as * by guid
| search disposition="TERMINATED"
| stats count by sourcenumber

I found some help before using subsearches, and found this | format thing to pass in more than 10k values, but this still takes ~8 minutes to run:

index="my_data" resourceId="enum*" NOT disposition=*
    [ search index="my_data" resourceId="enum*" disposition="TERMINATED"
      | fields guid
      | format ]
| stats count by sourcenumber
| sort -count

The issue is I need data from DS1 when it matches a guid from DS2, but I've learned that join isn't very good for Splunk (it's not SQL!). Thoughts on the most optimized way to get the top 10 of data in DS1 given certain conditions in DS2?

NOTE - I asked a similar question here, but can't figure out how to get the same method to work, since it's not excluding, it's more 'joining' the data: https://community.splunk.com/t5/Splunk-Search/What-s-best-way-to-count-calls-from-main-search-excluding-sub/m-p/658884

As always, thank you!!!

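One single-pass pattern worth trying: values(*) as * aggregates every field of every event, which is expensive; carrying only the two needed fields and flagging terminated guids inside the same stats avoids both that and the subsearch. A sketch:

index="my_data" resourceId="enum*" (sourcenumber=* OR disposition="TERMINATED")
| stats values(sourcenumber) as sourcenumber
        max(eval(if(disposition="TERMINATED", 1, 0))) as is_terminated
  by guid
| where is_terminated=1
| stats count by sourcenumber
| sort - count
| head 10
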
I'm not exactly sure what I need here. I have a multiselect:

<input type="multiselect" token="t_resource">
  <label>Resource</label>
  <choice value="*">All</choice>
  <prefix>IN(</prefix>
  <suffix>)</suffix>
  <delimiter>,</delimiter>
  <fieldForLabel>resource</fieldForLabel>
  <fieldForValue>resource</fieldForValue>
  <search base="base_search">
    <query>| dedup resource | table resource</query>

Table visual search:

| search status_code $t_code$ resource $t_resource$ HourBucket = $t_hour$
| bin _time span=1h
| stats count(status_code) as StatusCodeCount by _time, status_code, resource
| eventstats sum(StatusCodeCount) as TotalCount by _time, resource
| eval PercentageTotalCount = round((StatusCodeCount / TotalCount) * 100, 2)
| eval 200Flag = case(
    status_code=200 AND PercentageTotalCount < 89, "Red",
    status_code=200 AND PercentageTotalCount < 94, "Yellow",
    status_code=200 AND PercentageTotalCount <= 100, "Green",
    1=1, null())
| eval HourBucket = strftime(_time, "%H")
| table _time, HourBucket, resource, status_code, StatusCodeCount, PercentageTotalCount, 200Flag

I also have a table; sample data below:

_time        resource
1/10/2024    Red
1/10/2024    Green

When the user opens the multiselect dropdown and selects "All" (which is the default), the resource column should aggregate all the resources and display the resource as "All". But if the user selects individual resources, such as "Red" and "Green", these should be shown and broken down by resource.

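One way to get the "All" rollup is to collapse the resource field inside the search whenever the token resolves to the wildcard. With the prefix/suffix above, selecting All makes $t_resource$ render as the literal IN(*), so an eval can test for exactly that string. A sketch of the first few lines of the table search with that test added:

| search status_code $t_code$ resource $t_resource$ HourBucket = $t_hour$
| eval resource=if("$t_resource$"=="IN(*)", "All", resource)
| bin _time span=1h
| stats count(status_code) as StatusCodeCount by _time, status_code, resource

With "All" selected every row now shares resource="All", so the stats and eventstats aggregate across all resources; with individual selections the eval is a no-op.
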