All Posts

Hello, I have a dashboard that shows network traffic based on 4 simple text boxes for the user to input: SRC_IP, SRC_PORT, DEST_IP, DEST_PORT.

How can we create a filter with "EQUAL" and "NOT EQUAL TO" options for the DEST_IP input box? The requirement is that the end user should be able to select "NOT EQUAL TO" and enter an IP address or range in the input box to exclude whatever they want, and the panels will then display the corresponding data. For example, if they want to exclude all private IPs (10.x.x.x) from DEST_IP, they need to be able to select "NOT EQUAL TO" and enter "10.0.0.0/8".

I tried creating a MULTISELECT input box, but a MULTISELECT does not let the user type in arbitrary values to filter on manually. Any assistance will be highly appreciated.
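One way to build this (a minimal Simple XML sketch - the index name netflow and the token names are hypothetical, not from the original dashboard) is to pair the text box with a dropdown that sets an operator token, then splice both tokens into the panel search:

<input type="dropdown" token="dest_op" searchWhenChanged="true">
  <label>DEST_IP condition</label>
  <choice value="=">EQUAL</choice>
  <choice value="!=">NOT EQUAL TO</choice>
  <default>=</default>
</input>
<input type="text" token="dest_ip" searchWhenChanged="true">
  <label>DEST_IP</label>
  <default>*</default>
</input>
...
<query>index=netflow DEST_IP$dest_op$"$dest_ip$"</query>

Note that DEST_IP!="10.0.0.0/8" is a literal string comparison, not a CIDR match; for ranges you would instead route the token into something like | where NOT cidrmatch("$dest_ip$", DEST_IP).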
Assuming your lookup file containing the user ids has the column name "Account_Name", which matches the field name in the Windows events, you can do something like this:

index=wineventlog sourcetype=wineventlog EventCode=4624 [| inputlookup my_lookup_file.csv | fields Account_Name]
| stats ...

I verified it; it works in my env. Just make sure the column name / field name in the lookup is correct based on what you want to filter on.

PS: Hit "Mark as Answer" if this solves your query.
I managed to achieve the same outcome with an alert in Splunk Cloud like this:

index=my_idx path="/api/*/endpointPath" status=500
| rex field=path "/api/(?<userId>.*)/endpointPath"
| fields userId
| stats count by userId
| eventstats sum(count) as totalCount
| eval percentage=(count/totalCount)
| where percentage>0.05
| sort -count
index=_internal source=metrics.log* group=tcpin_connections hostname=<the UF or the server you're checking> should work
Hi,

I'm pretty new to Splunk and I have a simple question that maybe one of you guys could help me figure out. I have a search that I'm using to find the latest login events for a specific set of users. The problem is that there are about 130 users. I tried specifying them in the search using (Account_Name=user1 OR Account_Name=user2 OR Account_Name=user3 ...), but after entering all 130 it didn't work; there seems to be a limit after some point, beyond which I'd stop receiving results. So I did some research and saw people mention lookup files, and I created a CSV file with the list of users I'd like to run a report on. How can I join the lookup file to the query so that I'm only matching the values from the "UserID" field in my lookup table against the "Account_Name" field that comes with the Windows event logs I'm using to build the query? This is my query so far; how could I use the lookup to filter down to only those 130 users?

index=wineventlog sourcetype=wineventlog EventCode=4624 Account_Name!=*$
| stats latest(_time) as last_login_time by Account_Name
| convert ctime(last_login_time) as "Last Login Time"
| rename Account_Name as "User"
| sort - last_login_time
| table User "Last Login Time"
Hey @abow, I don't think that can work.
Hi, I'm exploring a way to get search results for the names of indexes, who created those indexes, and the creation date. So far I have the DDAS Retention Days, DDAS Index Size, DDAA Retention Days, and DDAA Usage, along with the earliest and latest event dates. I'm trying to get the owner of each index but am not getting the desired results. The search query I've been using is below:

| rest splunk_server=local /servicesNS/-/-/data/indexes
| rename title as indexName, owner as creator
| append
    [ search index=summary source="splunk-storage-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
    | fillnull value=0 rawSizeGB
    | eval rawSizeGB=round(rawSizeBytes/1024/1024/1024,2)
    | rename idxName as indexName ]
| append
    [ search index=summary source="splunk-ddaa-detail" (host="*.personalsplunktesting.*" OR host=*.splunk*.*)
    | eval archiveUsage=round(archiveUsage,2)
    | rename idxName as indexName ]
| stats latest(retentionDays) as "Searchable Storage (DDAS) Retention Days", latest(rawSizeGB) as "Searchable Storage (DDAS) Index Size GB", max(archiver.coldStorageRetentionPeriod) as "Archive Storage (DDAA) Retention Days", latest(archiveUsage) as "Archive Storage (DDAA) Usage GB", latest(ninetyDayArchived) as "Archived GB Last 90 Days", latest(ninetyDayExpired) as "Expired GB Last 90 Days" by indexName
| append
    [| tstats earliest(_time) as earliestTime latest(_time) as latestTime where index=* by index
    | eval earliest_event=strftime(earliestTime, "%Y-%m-%d %H:%M:%S"), latest_event=strftime(latestTime, "%Y-%m-%d %H:%M:%S")
    | rename index as indexName
    | fields indexName earliest_event latest_event ]
| stats values("Searchable Storage (DDAS) Retention Days") as "Searchable Storage (DDAS) Retention Days", values("Searchable Storage (DDAS) Index Size GB") as "Searchable Storage (DDAS) Index Size GB", values("Archive Storage (DDAA) Retention Days") as "Archive Storage (DDAA) Retention Days", values("Archive Storage (DDAA) Usage GB") as "Archive Storage (DDAA) Usage GB", values(earliest_event) as "Earliest Event", values(latest_event) as "Latest Event", values(creator) as "Creator" by indexName

Please can anyone help me with this? Thanks in advance!
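On the creator specifically: the REST endpoint exposes the owner as eai:acl.owner, not owner, so the rename above never finds a field to rename. Also note the first stats does not carry creator in its aggregations or by-clause, so the field is gone before the final stats. A minimal sketch of just the owner piece (index creation dates are not returned by this endpoint, and indexes created through config files typically report nobody or system as the owner):

| rest splunk_server=local /servicesNS/-/-/data/indexes
| rename title as indexName, "eai:acl.owner" as creator
| table indexName creator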
Macros are expanded at parse time, before the search runs and before any lookup values exist, which means a macro name stored in a lookup is just a string by the time it could be read - it will not be expanded.
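If the goal is still to drive everything from one lookup, a macro-free sketch (the field names and patterns below are placeholders taken from the question, not tested SPL): keep the friendly name in the lookup, run one rex per action over every event, and coalesce whichever extraction matched, since rex only populates its field when its pattern matches.

index=MyIndex source=MySource
| lookup MyLookup.csv Action OUTPUT FriendlyDescription
| rex field=_raw "(?<details1>pattern_for_action1)"
| rex field=_raw "(?<details2>pattern_for_action2)"
| eval Details=coalesce(details1, details2)
| table _time, UserName, FriendlyDescription, Details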
If the average is in a field, it should be available for you to use as an overlay. Which version of Splunk are you using? (Dashboard Studio is still under development, so some features may not be working in earlier versions. Make sure you are on the latest version.)
Here is a sample of the data posted to the TCP connection:

{
  "time": 1728428019,
  "host": "x.x.x.x",
  "fields": {
    "metric_name:x.x.x.x.ds.bIn": 1111,
    "metric_name:x.x.x.x.ds.bOut": 2222
  }
}
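That payload is essentially the HTTP Event Collector multiple-metrics JSON format rather than something a raw TCP input parses as metrics, so one option (a sketch - the hostname, port, and token are placeholders, and the target index must be a metrics index) is to POST it to HEC with "event": "metric" added:

curl -k https://splunk.example.com:8088/services/collector \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"time": 1728428019, "host": "x.x.x.x", "event": "metric", "fields": {"metric_name:x.x.x.x.ds.bIn": 1111, "metric_name:x.x.x.x.ds.bOut": 2222}}'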
If you can explain the algorithm for determining how the url is to be changed, we might be able to help you - currently your requirement is too vague.
But it is not included in the chart command so it isn't in your results and therefore not available to be shown as an overlay. You may need to find a way to calculate the value after the chart command.
Just re-evaluate Week after the stats command to be current week, current week -1 and current week -2 as appropriate
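Something along these lines (a sketch - it assumes Week holds an ISO week number such as the output of strftime(_time, "%V"), and it ignores year boundaries):

| eval w0=strftime(now(), "%V"), w1=strftime(relative_time(now(), "-1w@w"), "%V"), w2=strftime(relative_time(now(), "-2w@w"), "%V")
| eval Week=case(Week=w0, "current week", Week=w1, "current week -1", Week=w2, "current week -2")
| fields - w0 w1 w2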
Hi,

We are trying to get metrics into Splunk using TCP. So far we have tried the following:

inputs.conf
[tcp://44444]
connection_host = ip
index = metrics_idx
sourcetype = "json_no_timestamp" or "_json" or "metrics_csv"

We can get this to work if we change the sourcetype to statsd and emulate the statsd protocol, but we found this to be very limited.

We have 30-odd machines collecting "1000s" of data endpoints (mainly counters - was 5 things, now 12). What would be the best way to get this into Splunk, without using JSON/CSV files?

Thanks!
So how would this look? You can only sort in a particular order of precedence, i.e. 30days first, then if they are equal, 90days, then if still equal, 1day - you know that, right?
The "use existing data input" feature allows you to reuse the checkpoint for any other input or the same input where you want to start/resume from the last stored checkpoint.
Hi, I'm wondering if it's possible to define and execute a macro from a lookup. I have an index with several (about 50) user actions, which aren't named in a user-friendly manner. Additionally, each action has different fields, which I'd like to extract using inline rex queries. In short, I'd like a table like the following:

Time | UserName | Message
10:00 a.m. | JohnDoe | This is action1. Details for action1.
10:01 a.m. | JohnDoe | This is action2. Details for action2.
10:02 a.m. | JohnDoe | This is action3. Details for action3.

I know I can define a friendly name for the action using a lookup. I can also do the rex field extractions and compose a details field using a macro for each action. However, is there a way to also rex the fields and define the details in a lookup? I was thinking of creating a lookup like this:

Action | FriendlyDescription | MacroDefinition
action1 | "This is action1" | rex to extract fields for action1 | eval for Details for action1
action2 | "This is action2" | rex to extract fields for action2 | eval for Details for action2
action3 | "This is action3" | rex to extract fields for action3 | eval for Details for action3

And then something like this:

index=MyIndex source=MySource
| lookup MyLookup.csv ActionId OUTPUT FriendlyDescription, MacroDefinition
`code to execute MacroDefinition`
| table _time, UserName, FriendlyDescription, Details

I'm not sure if I'm barking up the wrong tree, but the reason I'd like to do this in one place (a lookup) instead of 50 different macro definitions is that it'd be neat to have all the code in one place. Thanks!
OK, I think I have the formula worked out, and now I can get my average and totals on one chart. I suspect, however, that you can't overlay a line chart on a stacked column chart: now that I'm able to add my average to my stacked column, it appears, but as one of the stacked items in each column rather than as a line overlay (the large orange stacked counts are the average).
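For what it's worth, a line overlay on a stacked column chart is normally possible by naming the field as a chart overlay instead of letting it join the stack. In classic Simple XML that looks like the options below; Dashboard Studio's column chart exposes a similar overlayFields option (Add chart overlay in the UI). This sketch assumes the chart command outputs the field as average:

<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.chart.overlayFields">average</option>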
@sainag_splunk what does this error mean?  
@sainag_splunk  Thank you for the prompt response!