All Topics


Hey All, I am working on a UI piece and trying to figure out the best way to create the following UI component using splunk/react-ui and the existing Splunk libraries.

<header> if <input-textbox1> assign <input-textbox2> [x] [add-row-button]
// when [add-row-button] is clicked, I want to create both text boxes in a new row.
Hello, I have some Windows logs that come in via forwarders that contain an IP address that I need to do a reverse lookup on. You can easily do this at search time; however, the IP addresses in the log are DHCP-assigned and frequently change, so I need to insert a field for the name at index time. Is this possible? I read a previous post that said there was no way, but it was an older post, so I was wondering if it may be possible now? Thanks
Hello, I have a lookup file which contains 10 service names. In my dashboard I have a dropdown for those services, and I have included the lookup file in the dropdown. That works fine, but I need to show 10 panels (because the lookup file has 10 entries as of now) when I select the "ALL" option in the dropdown. I have added the "ALL" option as the default. Right now it shows only one panel when "ALL" is selected, but I need it to show 10 panels.

</input>
<input type="dropdown" token="service" searchWhenChanged="true">
  <label>Services</label>
  <choice value="*">ALL</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>user</fieldForLabel>
  <fieldForValue>user</fieldForValue>
  <search>
    <query>| inputlookup services_vas.csv</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
<input type="dropdown" token="host" searchWhenChanged="true">
  <label>Server</label>
  <choice value="eudmsurfvas1">eudmsurfvas1</choice>
  <choice value="eudmsurfvas2">eudmsurfvas2</choice>
  <default>eudmsurfvas1</default>
  <initialValue>eudmsurfvas1</initialValue>
</input>
</fieldset>
<row>
  <panel>
    <title>Downtime for Service - $service$</title>
    <single>
      <title>Service stop time - ( $stop$) and Service restored time -( $restore$ )</title>
      <search>
        <query>index=surf host=$host$ user=$service$ | streamstats current=f last(_time) as LastTime by user | eval delay=LastTime-_time | table delay , LastTime , _time | where delay &gt; 300 | stats latest(delay)</query>
        <earliest>$selectedTime.earliest$</earliest>
        <latest>$selectedTime.latest$</latest>
      </search>
      <option name="colorMode">block</option>
      <option name="drilldown">none</option>
      <option name="rangeColors">["0x53a051","0x006d9c","0xdc4e41"]</option>
      <option name="rangeValues">[300,600]</option>
      <option name="underLabel">Downtime in secs</option>
      <option name="useColors">1</option>
    </single>
  </panel>
</row>

Thank you
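One pattern that may fit here (a sketch, not a tested dashboard): rather than cloning ten panels, a single panel can be split into one visualization per service with the trellis layout, so selecting "ALL" renders every service returned by the lookup. The index, token, and field names below are copied from the post; the trellis options are standard Simple XML.

```xml
<row>
  <panel>
    <title>Downtime per Service</title>
    <chart>
      <search>
        <query>index=surf host=$host$ user=$service$
| streamstats current=f last(_time) as LastTime by user
| eval delay=LastTime-_time
| where delay &gt; 300
| stats latest(delay) as Downtime by user</query>
        <earliest>$selectedTime.earliest$</earliest>
        <latest>$selectedTime.latest$</latest>
      </search>
      <!-- one sub-visualization per value of "user" -->
      <option name="trellis.enabled">1</option>
      <option name="trellis.splitBy">user</option>
    </chart>
  </panel>
</row>
```

With trellis, the number of sub-panels tracks the lookup automatically, so an eleventh service would appear without editing the dashboard.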
Hi, I am trying to put together a table like this - I need to calculate the max TPM, max response time, and average response time:

url      max_per_hour  total_count  AverageRespTime  MaxRespTime
/test1   314           1514
/test2   777           2876

My peak hour count and total count are coming through, but AverageRespTime and MaxRespTime are blank. I want to calculate this for the last 30 days, and below is the query I am using:

index=myindex sourcetype=access_combined_wcookie status=200 | bucket span=1h _time | stats count as hour_count by _time url | stats max(hour_count) as max_per_hour sum(hour_count) as total_count avg(time_serve) as AverageRespTime max(time_serve) as MaxRespTime by url

Can someone advise what I am doing wrong here, or maybe suggest some other way to achieve this task? NOTE - the time_serve field is available in my interesting fields.
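A likely cause: the second stats can only see fields produced by the first one, and time_serve is discarded by `stats count ... by _time url`, so avg/max over it come back null. One way to fix it (a sketch, untested against this data) is to carry the response-time aggregates through the first stats and recombine them, computing the average as sum/count rather than an average of hourly averages:

```
index=myindex sourcetype=access_combined_wcookie status=200
| bucket span=1h _time
| stats count as hour_count sum(time_serve) as sum_serve max(time_serve) as max_serve by _time url
| stats max(hour_count) as max_per_hour sum(hour_count) as total_count
        sum(sum_serve) as total_serve max(max_serve) as MaxRespTime by url
| eval AverageRespTime=round(total_serve/total_count, 2)
| fields - total_serve
```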
Hi, We are running Splunk Enterprise version 7.2.9.1 in our Splunk environment. We installed and configured the Splunk Add-on for Microsoft Office 365 (splunk_ta_o365) version 2.0.1 and were getting data successfully until a few days ago. The error that we see is the following:

2020-06-16 10:33:31,183 level=ERROR pid=12626 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | datainput="splunkqa_ma_auditexchange" start_time=1592318010 | message="Data input was interrupted by an unhandled exception." ... SSLError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /12345678c-1234-5ab6-9ab12-a12bc34de5fg/oauth2/token (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:741)'),))

We would appreciate your guidance on how to handle this kind of error with the Splunk Add-on for MS Office 365.
I am creating a Splunk dashboard pie chart panel, and the values I am displaying are too long (long strings) to fit in a small panel. Is it possible to display the chart values as readable / concise labels? For example, if the below is my sample query:

index=network sourcetype=logserver | stats dc(Field.ID) by Field.Message

If Field.Message is "All transfers were successful", I want to display just "Success" in the pie chart; if Field.Message is "Some transfers failed", I want to display just "Failed", and so on. What is the best way to achieve this? Thanks in advance.
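One common approach (a sketch): map the long messages to short labels with eval/case before the stats, so the pie slices group by the short label. Note that field names containing dots need single quotes on the right-hand side of eval. The two message strings below are the ones from the post, with a catch-all for anything else:

```
index=network sourcetype=logserver
| eval Label=case(
    'Field.Message'=="All transfers were successful", "Success",
    'Field.Message'=="Some transfers failed", "Failed",
    true(), 'Field.Message')
| stats dc(Field.ID) by Label
```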
Good afternoon. After a Splunk upgrade from version 7.3.0 to 8.0.4, Cyrillic words in PDF reports appear as garbled symbols. On version 7.3.0 it helped to place the font at splunk/share/splunk/fonts/tahoma.ttf. That no longer works on the current version, even though the permissions on splunk/share/splunk/fonts/tahoma.ttf are OK.
I have an index with certain field values, and I want to be notified when a specific field's value changes. I am aware of using streamstats with change_on_reset=true by that field name, but the challenge is that I am not sure when that field received its last value. Suppose field X had values 1, 2, 3 at time Y on day Z, and then got values 4, 5 at time P on day Q. I will not know the day (Z) and time (Y), so I cannot pick a time range to look back over for that field's values. To summarize: if I knew the time range to select, like "last 7 days" or "last 24 hours", I could see whether the field value changed via streamstats. But there is no definite time period for the flow of events into the index, so we are not sure when, and what, the last event stored for that particular field is. Can someone help me with this requirement?
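One way to sidestep the unknown time range (a sketch; `your_index` and the field `X` are placeholders, and an all-time search can be expensive on a large index) is to search over all time and let streamstats flag the change itself, then keep only the most recent change:

```
index=your_index earliest=0 X=*
| sort 0 _time
| streamstats current=f last(X) as prev_X
| where X != prev_X
| stats latest(_time) as last_change_time latest(prev_X) as old_value latest(X) as new_value
| eval last_change_time=strftime(last_change_time, "%F %T")
```

An alert on this search would fire whenever a change event exists, without you having to know day Z or time Y in advance.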
WARN UserManagerPro - AQR not supported and user=username@domain.com information not found in cache or 404 User not found

C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\metadata\local.meta

When trying to delete inputs created in DB Connect, Splunk was not able to authenticate the user via our IdP. To work around this, edit local.meta, find the input to be deleted, and change owner = username@domain.com to owner = nobody. Then restart the splunkd service.
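For reference, the edit looks roughly like this (the stanza name is hypothetical; it depends on the name of the input you created):

```
# C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\metadata\local.meta

# Stanza for the DB Connect input you want to delete
[inputs/my_db_input]
owner = nobody
```

After saving the file and restarting splunkd, the input can be deleted without the IdP user lookup failing.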
Facing an issue with the latest Lookup File Editor version 3.4.2 after upgrading from the previous version. Unable to save an existing lookup file or manually create a new lookup file; it keeps showing "Saving..", but the file is not being saved. Please suggest. Checked on both Splunk Enterprise versions 7.3.3 and 7.3.4. @LukeMurphey
Hello, I would like support with a query to compare the values of the last 30 minutes: if a value is below 80% of the expected volume, generate another column flagging it (in red) as having exceeded the limit. Example:

index="txt" "Retrieving message #" | timechart span=30m count as server

Command result:

_time                  server
2020-06-16 08:00:00    857
2020-06-16 08:30:00    1605
2020-06-16 09:00:00    4507
2020-06-16 09:30:00    4666
2020-06-16 10:00:00    3798

In this case, the first two volumes were below expectations.
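As a sketch of one possible shape for this (the post does not define what the 80% baseline is measured against, so here it is assumed to be the running average of the preceding intervals; adjust to your real threshold):

```
index="txt" "Retrieving message #"
| timechart span=30m count as server
| streamstats current=f avg(server) as baseline
| eval status=if(isnotnull(baseline) AND server < 0.8*baseline, "BELOW 80%", "OK")
```

Coloring the flagged column red would then be a matter of table formatting (e.g. color ranges on the status column) in the dashboard rather than of the search itself.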
Scenario: I have simulated an attack from PC1 to PC2, which has generated logs on both machines as below. Now I want to create an alert where both event IDs are captured in Splunk within a time frame of 30 seconds.

PC1 - Log source: Windows Log, Event ID = 4648
PC2 - Log source: Windows Log, Event ID = 4624

Thanks
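One simple sketch (the index name and host values are assumptions; a real correlation may also need to match on the account or target host): bucket events into 30-second windows and alert when both event codes appear in the same window. Note that fixed bins can miss pairs straddling a bin boundary; `transaction maxspan=30s` is an alternative that avoids that at a higher cost.

```
index=wineventlog (host=PC1 EventCode=4648) OR (host=PC2 EventCode=4624)
| bin _time span=30s
| stats dc(EventCode) as distinct_codes values(host) as hosts by _time
| where distinct_codes = 2
```

Saved as an alert with "number of results > 0", this would fire whenever both IDs land in the same 30-second window.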
I recently installed Splunk on a server with the license master URI set. Now when I check the slaves endpoint (servicesNS/nobody/system/licenser/slaves), it shows the server and its GUID with active_pool_id as unallocated. Whereas when I check the pools endpoint for the 'unallocated' pool, it doesn't have this GUID. What's the difference between these two, and why is it not shown under the 'unallocated' pools endpoint?
Hey! So I am trying to hand the Kalman filter in Splunk's MLTK a dynamic value for the period, which I first find through the autocorrelation function (ACF) in a subsearch and name corr_lag:

index = cisco_prod | timechart span=1h count as logins_hour | eval corr_lag= [`ACF_Correlation_Lag`] | predict "logins_hour" as prediction algorithm=LLP holdback=200 future_timespan=368 period=corr_lag upper95=upper95 lower95=lower95 | `forecastviz(368, 200, "logins_hour", 95)`

The subsearch looks as follows:

search index = cisco_prod | timechart span=1h count as logins_hour | fit ACF logins_hour k=200 fft=true conf_interval=95 as corr | top limit=2 acf(corr),Lag | stats max(Lag) as corr_lag | return $corr_lag

Somehow I must be doing something wrong, because I always get the following error:

command="predict", Invalid period : 'corr_lag'

The subsearch actually works fine and gives me the right period back. Can somebody help me find the right way to do this? Thanks!
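The period option of predict has to expand to a literal integer before the command parses its arguments; `eval corr_lag=...` produces a field, which predict cannot read as an option value. One thing worth trying (a sketch; whether predict accepts a subsearch in this position should be verified on your version) is to put the subsearch directly into the option so it is substituted with the number first:

```
index = cisco_prod
| timechart span=1h count as logins_hour
| predict "logins_hour" as prediction algorithm=LLP holdback=200 future_timespan=368
    period=[ search index = cisco_prod
        | timechart span=1h count as logins_hour
        | fit ACF logins_hour k=200 fft=true conf_interval=95 as corr
        | top limit=2 acf(corr),Lag
        | stats max(Lag) as corr_lag
        | return $corr_lag ]
    upper95=upper95 lower95=lower95
| `forecastviz(368, 200, "logins_hour", 95)`
```

If predict rejects the subsearch there, the `map` command is an alternative: compute corr_lag first, then let map run the predict search with `$corr_lag$` substituted as a literal.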
Hi Everyone, We have two indexed fields in index A. We are doing some calculations using tstats and collecting the results into another summary index, B. Will those two fields act as indexed fields in the summary index or not?
In our infra we collect logs from Palo Alto, ePO, proxy, etc.; everything works fine except the Palo Alto log collection. We changed the queue module (queue.type = "LinkedList") and then moved to the direct queue (queue.type = "Direct"), but it is still not good enough. So we want to know whether this setup is suitable, or whether there is a configuration problem that we have not seen.
Hi All... For those who already know some SQL, the join commands are pretty easy. Some of my teammates are not SQL users; they were not aware of join, and when they tried to read the docs, they could not understand them easily. Hence I thought to create this post for everyone. Thanks.
Hi Team, We have purchased 200 GB of licenses for a Splunk Cloud subscription and another 200 GB of licenses for an Enterprise Security subscription for our organization. Currently we calculate the license usage for the Splunk Cloud subscription by logging into the Splunk Cloud search head URL and navigating to the Cloud Monitoring Console app, so we are able to calculate it for the Splunk Cloud subscription. We want to know how to calculate the licensing for the Enterprise Security subscription, i.e. how many GB are we ingesting for the Enterprise Security subscription in a day, out of 200 GB? How do we check it? Also, in the future we are planning to buy an additional license for the Splunk Cloud subscription; at that time, will it be mandatory to buy an additional license for the Enterprise Security subscription along with the Splunk Cloud subscription? Finally, I believe we are not utilizing the full GB of the Enterprise Security subscription license, so would it be possible to move that license to the Splunk Cloud subscription?
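For the ingest-volume side of this, daily usage can be read from the license usage logs (a generic sketch; whether ES volume is metered separately from the Cloud subscription depends on the contract, which Splunk Sales/Support would need to confirm):

```
index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB_ingested_per_day
```

Splitting the same search by index (`... sum(GB) by idx`) can show how much of the daily volume comes from the indexes that ES consumes.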
While following the Snort 3 manual, after switching the Splunk license to the Free option, the password protection is gone. Now any computer on my internal (and probably external) network can log into my port 8000 without any protection. How can I put password protection back on this port?
I have a Python script which takes a .log file as input and produces .csv files. I used to upload these .csv files to Splunk and process them to create charts and statistics tables. I wanted to know if there is any option where I upload all the .log files to Splunk, the Python script runs on the uploaded .log files, and the generated .csv files are automatically uploaded to Splunk... In short, a one-shot solution: at the beginning I just upload all the .log files, and maybe with the click of a button in a dashboard it creates all the .csv files and uploads them automatically, and then I can create queries to build the charts.
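One conventional way to get this end-to-end (a sketch; the app name, script path, directory, and sourcetypes below are all hypothetical) is a scripted input that runs the converter on a schedule, plus a monitor input watching the CSV output directory, so the CSVs are indexed as soon as the script writes them:

```
# $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf

# Run the .log -> .csv converter every 5 minutes
[script://$SPLUNK_HOME/etc/apps/my_app/bin/convert_logs.py]
interval = 300
index = main
sourcetype = converter_status

# Index whatever CSVs the script produces
[monitor:///opt/data/csv_output/*.csv]
index = main
sourcetype = converted_csv
```

With this shape, no dashboard button is needed: dropping new .log files where the script reads them is enough, and the charts built on sourcetype=converted_csv update on their own.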