All Topics


I just installed the Windows TA. Now, when I search, I get the following errors:

Could not load lookup=LOOKUP-action_for_win_timesync_status
Could not load lookup=LOOKUP-app4_for_windows_security
Could not load lookup=LOOKUP-app_for_windows_system_ias
Could not load lookup=LOOKUP-vendor_info_for_microsoft_dhcp
Could not load lookup=LOOKUP-vendor_info_for_windows_security
Could not load lookup=LOOKUP-vendor_info_for_windows_system
Could not load lookup=LOOKUP-vendor_info_for_windowsupdatelog

These are all automatic lookups in props.conf. The lookup files don't exist, and there are no saved searches in the TA that would create them. How is it possible that the Windows TA (and the Linux TA, for that matter), two of the most downloaded apps on Splunkbase, are so broken? I can't believe your company can find the time to build useless apps for the Apple Watch, but you can't nail the two most widely used apps for getting data into Splunk. Shame on you.
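One possible stopgap while this gets sorted out: automatic lookups defined in the TA's default/props.conf can be disabled by overriding them with an empty value in local/props.conf. This is only a sketch — the stanza name below is an example, and you'd need to copy the actual stanza name that carries each failing LOOKUP- entry from the TA's own default/props.conf:

```conf
# Splunk_TA_windows/local/props.conf -- sketch, stanza name is illustrative.
# An empty value in local/ clears the setting inherited from default/,
# which disables that automatic lookup.
[source::XmlWinEventLog:Security]
LOOKUP-vendor_info_for_windows_security =
```

This silences the error rather than fixing the missing lookup files, so it's a workaround, not a resolution.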
Hi all, has anyone been able to get the UF upgrade app for Windows to work? I get a message in the logs saying it started, but nothing else, and I don't see any other logs. My MSI is in the \static directory, and I've tried the following variables.

The app is: https://splunkbase.splunk.com/app/5003/

set "UPGRADEVER=8.0.4"
set "UPGRADEFILE=static\splunkforwarder-8.0.4-767223ac207f-x64-release.msi"

and also this:

set "UPGRADEVER=8.0.4"
set "UPGRADEFILE=splunkforwarder-8.0.4-767223ac207f-x64-release.msi"

The logs show that the job ran, but nothing has changed. No further details in any other logs.
Morning, everyone. Thank you in advance for your help. I would like to remove part of a string from my results. My query results look like this:

TRERY\j2874ac
TRERY\k5846de

I'd like to delete the "TRERY\" prefix to get:

j2874ac
k5846de

How do I proceed? Thank you very much.
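One way to do this, assuming the values live in a field called user (adjust the field name to match your extraction), is eval with replace(). Note the doubled-up backslashes: one level of escaping for the SPL string, one for the regex, so "\\\\" matches a single literal backslash:

```spl
index=your_index
| eval user=replace(user, "^TRERY\\\\", "")
```

If the prefix varies, replace(user, "^[^\\\\]+\\\\", "") strips everything up to and including the first backslash instead.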
I have a lookup table with Scheduled Tasks called Scheduled_Tasks, and only one column in it called "Task_Name". This matches the "TaskName" field in my events. I need to do a search where I only display results where the TaskName field in events DOES NOT contain a value in the Scheduled_Tasks lookup table. I've looked at almost every question/answer on this topic and came up with this, however it is not excluding anything I have in the lookup table. What am I doing wrong? Thank you!

index=myindex EventID=4698 NOT [|inputlookup Scheduled_Tasks | fields Task_Name]
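A likely culprit: the subsearch returns a field named Task_Name, but the events carry TaskName, so the generated NOT clause filters on a field that doesn't exist in the events and excludes nothing. Renaming inside the subsearch so the field names line up is the usual fix, sketched here:

```spl
index=myindex EventID=4698 NOT [| inputlookup Scheduled_Tasks | rename Task_Name AS TaskName | fields TaskName]
```

Note this does exact-value matching; if "contain" really means substring matching, the subsearch would instead need to emit wildcarded search terms (e.g. via eval and format).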
All,  I am in a transition state moving from one instance of Splunk to another. The old instance needs to stay up for a while, but I'd like to start shipping a certain subset of data (one sourcetype) to the new stack as well.    Is there a way to get a universal forwarder to send all data to two separate indexers? 
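Yes — outputs.conf on the forwarder can define two target groups, and inputs.conf can pin a particular input to both via _TCP_ROUTING while everything else follows the default group. A sketch, with hostnames, ports, and the monitored path as placeholders:

```conf
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = old_indexers

[tcpout:old_indexers]
server = old-idx1.example.com:9997

[tcpout:new_indexers]
server = new-idx1.example.com:9997

# inputs.conf -- send just this one input to both stacks
[monitor:///var/log/myapp/app.log]
sourcetype = my_sourcetype
_TCP_ROUTING = old_indexers,new_indexers
```

All other inputs keep going only to old_indexers via defaultGroup, so the new stack receives just the one sourcetype.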
Hi, I have to group a set of servers based on the default search and present it in a dashboard, based on the host field and different sourcetypes. Example: say we have the server list below.

Prod contains server1, server2, server3.
QA contains server4, server5, server6.
DEV contains server7, server8, server9.

sourcetypes: access_logs, catalina_logs.

I need to group these servers by host and sourcetype.
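One approach, assuming the host names are stable enough to hard-code (a lookup table mapping host to environment scales better): classify each event with eval's in() function and group with stats. A sketch:

```spl
index=your_index sourcetype IN (access_logs, catalina_logs)
| eval env=case(in(host, "server1", "server2", "server3"), "Prod",
                in(host, "server4", "server5", "server6"), "QA",
                in(host, "server7", "server8", "server9"), "DEV")
| stats count by env, sourcetype, host
```

Swapping stats for chart or timechart gives the same grouping in a dashboard-friendly shape.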
I'm getting the following error in the scheduled emails for PDF delivery for my dashboard. It only occurs in emails, not when I export as PDF.

ERROR sendemail:1167 - An error occurred while generating a PDF: Failed to fetch PDF (status = 400): b'Unable to render PDF. Bailing out of Integrated PDF Generation. Exception raised while preparing to render "Untitled" to PDF. a bytes-like object is required, not \'str\'
Hello everyone, hoping for some feedback. My goal is to make a certain glass table everyone's default view within ITSI. So far, the closest I have come to doing this is the following. This gives me the menu title, and I can click on it and my table comes up; the problem is the default="true" attribute is not being honored.

Nav:

<!--Copyright (C) 2005-2019 Splunk Inc. All Rights Reserved.-->
<nav color="#474444">
<a href="/app/itsi/glass_table_editor_beta?savedGlassTableId=438a9a30-8406-11ea-9dad-f0d4e2e5510c" default="true">My Insight</a>
<collection label="Service Analyzer">
<view name="homeview"/>
<view name="saved_homepage_lister"/>
</collection>
......
......
......

I did find this answer: https://community.splunk.com/t5/Archive/how-to-set-href-xxx-as-the-default-dashboard/td-p/271923 (his workarounds do not work for me), but it is very old, so I was hoping there were more options available at this point.
(I am reposting this question from email, with permission from the person who emailed.)

I need to basically join 3 indexes where the 'join' info is in a 4th index. The 3 former indexes have around 50,000 entries each, while the 4th index has around 500,000. The fields in the indexes are:

Indexes containing the data (Index A, B, C, ...): name, status, id
Index containing relationships (Index REL): id, parent, child

The parent and child values in REL match the id values in A, B and C. Note also that the id values in A, B and C never collide; in other words, the "single id space" of the REL events has no overlaps across the entities in A, B, C.

From A to B is normally a one-to-many relation, and from B to C is a many-to-many relation. The specific use case here is that A represents Applications, B represents Deployments of those applications, and C represents Servers implementing those deployments. A server can be used for several deployments, hence the many-to-many relation here.

The obvious way would be to do something like:

index=A
| rename id as A_id, name as Aname, status as Astatus
| join max=0 type=left A_id [ | search index=REL | rename parent as A_id, child as B_id ]
| join max=0 type=left B_id [ | search index=B | rename id as B_id, name as Bname, status as Bstatus ]
| join max=0 type=left B_id [ | search index=REL | rename parent as B_id, child as C_id ]
| join max=0 type=left C_id [ | search index=C | rename id as C_id, name as Cname, status as Cstatus ]
| table Aname Astatus Bname Bstatus Cname Cstatus

This, of course, fails miserably because the join only returns 10,000 results while the REL index has 400,000 events. I can rewrite the first join as:

index IN (A, REL)
| eval parent=if(index=REL, parent, id), child=if(index=REL, child, id)
| stats values(name) as Aname values(status) as Astatus values(child) as childs by parent
| table Aname Astatus childs

But I'm at a loss how to squeeze in the other indexes and relation, and I also have some hope that there's a way to avoid join entirely.

UPDATE: Here is a run-anywhere search to fabricate some sample input rows:

| makeresults
| fields - _time
| eval data="1,A,appl1,,;2,A,appl2,,;3,D,depl1,,;4,D,depl2,,;5,D,depl3,,;6,S,serv1,,;7,S,serv2,,;8,S,serv3,,;9,S,serv4,,;10,R,,1,3;11,R,,2,4;12,R,,2,5;13,R,,3,6;14,R,,4,7;15,R,,5,8;16,R,,5,9", data=split(data, ";")
| mvexpand data
| rex field=data "(?<sys_id>[^,]*),(?<type>[^,]*),(?<name>[^,]*),(?<parent>[^,]*),(?<child>[^,]*)"
| fields sys_id type name parent child
| eval parent=if(parent=="",null(),parent), child=if(child=="",null(),child)

The desired output is rows that map appl1 to depl1 and serv1, and that map appl2 to depl2,depl3 and serv2,serv3,serv4.
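There is a join-free pattern for this shape of data: key every row on coalesce(parent, sys_id), roll entities and relationship rows together with stats, expand the children, and repeat one hop at a time. Sketched against the fabricated sample above (append it after the makeresults block); against the real indexes, type would be derived from index and sys_id/name from the per-index fields:

```spl
| eval key=coalesce(parent, sys_id)
| stats values(eval(if(type=="A", name, null()))) as app
        values(eval(if(type=="D", name, null()))) as depl
        values(eval(if(type=="S", name, null()))) as serv
        values(child) as child
        by key
| mvexpand child
| eval key=if(isnotnull(app) AND isnotnull(child), child, key)
| stats values(app) as app values(depl) as depl values(serv) as serv
        values(eval(if(isnull(app), child, null()))) as child
        by key
| mvexpand child
| eval key=if(isnotnull(depl) AND isnotnull(child), child, key)
| stats values(app) as app values(depl) as depl values(serv) as serv by key
| where isnotnull(app)
| stats values(depl) as depl values(serv) as serv by app
```

Each stats pass merges an entity with the REL rows that share its id; re-keying on the child then carries the parent's fields down one level. On the sample this yields appl1 → depl1/serv1 and appl2 → depl2,depl3/serv2,serv3,serv4. The mvexpand steps are the scaling concern at 500k relationship rows, but there is no subsearch result limit involved.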
I have a lookup table consisting of both CIDR addresses and regular x.x.x.x addresses under the field named "IP_Address". The lookup definition has Match type: CIDR(IP_Address). I need to create a search against a data model that only shows events where the src in the events matches the IP_Address field in the lookup table. I will also add additional fields from the events to the search results: _time, action, IP_Address/src, dest, dest_port. I would also like to add an additional field from the lookup table called "Date Blocked". I have this so far for the first part, but it is not returning any results. Any suggestions? Thank you in advance.

from datamodel:"Network" | where [inputlookup Blocked_IPs | rename src as IP_Address | fields IP_Address ] | table _time, action, IP_Address, dest, dest_port
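One thing to note: a where + inputlookup subsearch compares strings exactly and never exercises the CIDR match type, so 10.0.0.0/8 entries can't match a plain src value that way. The lookup command does apply the match type configured on the lookup definition. A sketch, assuming the lookup definition is named Blocked_IPs and the extra column is literally named "Date Blocked":

```spl
| from datamodel:"Network"
| lookup Blocked_IPs IP_Address AS src OUTPUT IP_Address AS matched_block, "Date Blocked" AS date_blocked
| where isnotnull(matched_block)
| table _time, action, src, matched_block, dest, dest_port, date_blocked
```

The isnotnull filter keeps only events whose src fell inside some lookup entry, and matched_block shows which CIDR or address it was.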
Hi, I'm using eventgen to create sample data. Whenever someone runs a command, the Linux audit log records the event over multiple lines. For example, if someone uses sudo to run 'cat /etc/shadow', the audit log will record the user's attempt to access sudo, then another line that will show the authentication status (success or failure), then the actual command, '/etc/shadow', etc. Is there a way to set the token replacement to change the username, hostname, time, and command for that event, and do it, say, 30 times? Each event, which has multiple lines, will have the same username, hostname, time and command; then the next event will have a different username, hostname, time and command. Thanks, Bruce
Raw value:

logtype=audit 2020-06-15T12:25:52,650| tid:SDFGH3456gtbhjcfdt$%| AUTHN_REQUEST| | 38 | | 123| | asdxc| AS| ss| | | 40

Query:

index = "PF.log" | eval fields=split(_raw,"|") | eval response=mvindex(fields,13) | timechart values(response) BY host

I am interested in the last value, which is 40 in this example. I tried converting the value with tonumber() and tried other conversion techniques, which don't seem to work for some reason.
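One thing worth checking: the split values keep their surrounding spaces, so mvindex returns " 40" rather than "40", which can defeat numeric conversion and charting. Trimming before tonumber(), and indexing from the end so the position doesn't drift if the field count varies, is a reasonable fix to try:

```spl
index="PF.log"
| eval fields=split(_raw, "|")
| eval response=tonumber(trim(mvindex(fields, -1)))
| timechart avg(response) BY host
```

avg() here is just an example aggregation; once response is numeric, values(), max(), etc. all chart normally.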
I have a data model that has grown quite large, over 7TB for Network Sessions. It's set to 3 months accelerated. I want to change it to 7 days, but I'm not sure it will remove the rest of the data summaries stored in storage. We only need 7 days' worth of accelerated summaries. Will changing the timespan reduce the number of summaries consuming storage? Thanks for the assistance!
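For reference, the summary range is the acceleration.earliest_time setting on the data model; changed via the UI it lands in datamodels.conf roughly like the sketch below (the stanza name must match the data model's actual id, which may differ from its display name). My understanding is that summaries older than the new range are cleaned up by the acceleration maintenance process over time rather than deleted instantly, so it's worth verifying disk usage afterwards; a rebuild of the acceleration forces the issue if needed.

```conf
# datamodels.conf sketch -- stanza name is illustrative
[Network_Sessions]
acceleration = true
acceleration.earliest_time = -7d
```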
Hi! I have a project to enhance a dashboard for a client, and wanted to know how best to start. I need to provide a function in the dashboard that will automatically show datasets that are owned by a particular user when they log in to use that dashboard. To clarify: the main dashboard is used by a variety of people, and I'm trying to figure out if there's a way to program the dashboard to automatically display datasets owned by whichever user is logged in and using the dashboard. I feel like it's possible but have no idea where to start. Any advice would be greatly appreciated, thank you!
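A possible starting point: Simple XML exposes the logged-in user as the global token $env:user$, which can be dropped into any search. A sketch, using saved views queried over REST as a stand-in for whatever "dataset" means in this case (the field eai:acl.owner is the knowledge object's owner):

```xml
<search>
  <query>
    | rest /servicesNS/-/-/data/ui/views splunk_server=local
    | search eai:acl.owner="$env:user$"
    | table title eai:acl.app
  </query>
</search>
```

The same token works in a plain index search (e.g. owner="$env:user$") if the datasets are events with an owner field rather than knowledge objects.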
I've got this in my default/setup.xml:

<setup>
  <block title="F5 API Gateway Key" endpoint="storage/passwords" entity="_new">
    <text>Enter F5 API Gateway Key User</text>
    <input field="gw_user">
      <label>F5 API Gateway Key Username (enter gw_key as the username)</label>
      <type>text</type>
    </input>
    <text>Enter F5 API Gateway Key</text>
    <input field="gw_key">
      <label>F5 API Gateway Key</label>
      <type>password</type>
    </input>
  </block>
  <block title="Username/Password" endpoint="storage/passwords" entity="_new">
    <text>Enter Username</text>
    <input field="name">
      <label>Username</label>
      <type>text</type>
    </input>
    <text>Enter Password</text>
    <input field="password">
      <label>Password</label>
      <type>password</type>
    </input>
  </block>
</setup>

I get this error when I enter data and try to save it:

Encountered the following error while trying to update: Error while posting to url=/servicesNS/nobody/TA-kp_f5_api_inputs/storage/passwords/

I'm not certain why I'm getting the error or how to resolve it. Any suggestions? TIA, Joe
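One likely cause to rule out: the storage/passwords endpoint only accepts the fields name, password, and realm, so the first block's gw_user/gw_key input fields would make that POST fail. A sketch of a shape the endpoint should accept, using realm to keep the two credentials apart (the field names here are the endpoint's, not free-form):

```xml
<setup>
  <block title="F5 API Gateway Key" endpoint="storage/passwords" entity="_new">
    <input field="realm">
      <label>Realm (e.g. f5_gateway, to distinguish this credential)</label>
      <type>text</type>
    </input>
    <input field="name">
      <label>Username (enter gw_key as the username)</label>
      <type>text</type>
    </input>
    <input field="password">
      <label>F5 API Gateway Key</label>
      <type>password</type>
    </input>
  </block>
</setup>
```

The second block already uses name/password, so if the error persists with only that block present, the cause is elsewhere (permissions on the passwords endpoint, for instance).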
Collecting summary data into a summary index every hour. Data will only have events at 8am, 9am, 10am, etc. On my timechart panel, Splunk still shows 8:10, 8:20, 8:30, etc., and my data bars are spaced out really far. I tried span=1h but it still displays this way. There are only going to be 24 values a day for each application I chart. How can I get these bars to show closer together and use a wider bar for each value? I don't like the super-skinny bar width and spacing. Thanks!
Hello, my _internal index has grown past the 30 days of retention to 360+ days. This is due to future timestamps in the data keeping buckets from rolling. I want to just remove the entire index from the cluster. My plan is to make the changes on the cluster master: set frozenTimePeriodInSecs to something small like 60, set maxHotSpanSecs to 3600 (1 hr), and push them out. Once the cluster restarts, I will remove the settings and push again. Does this sound like a good plan? Will this remove all of the data, including the future-timestamped data? Thanks for the thoughts!
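For what it's worth, the plan as described maps to an indexes.conf push like the sketch below. The caveat is the future-timestamped data itself: a bucket is frozen based on the age of its newest event, so a bucket whose newest event is months in the future will not be considered old enough to freeze until that timestamp passes, even with frozenTimePeriodInSecs at 60. maxHotSpanSecs forces hot buckets to roll to warm, which helps, but it's worth verifying afterwards that the future buckets actually froze; deleting and recreating the index is the blunt but certain alternative.

```conf
# Pushed from the cluster master as a temporary bundle; revert after the purge.
[_internal]
frozenTimePeriodInSecs = 60
maxHotSpanSecs = 3600
```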
Hello, we're trying to use this add-on to get MS Teams Call Quality data into Splunk. However, there is a persistent issue: we're not able to subscribe to Graph change notifications. The error we get is

2020-06-15 14:59:01,505 ERROR pid=13270 tid=MainThread file=base_modinput.py:log_error:309 | Could not create subscription: 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

When tried through Postman or curl, the error is

{ "error": { "code": "InvalidRequest", "message": "The underlying connection was closed: An unexpected error occurred on a send.", "innerError": { "date": "2020-06-15T15:08:45", "request-id": "bbd8c734-15e4-4439-b2bc-be49ba2a6335" } } }

Anyone facing a similar error? @jconger
Hello Team, here is my requirement: I have to check the running status of an application installed on a Linux server. For this, I have a log generated by the application, which might not contain continuous time intervals; the log only gets updated when a user is using the app.

In the log, I have 3 high-priority exceptions: TransactionRolledbackException, WIMSystemException, ConnectionWaitTimeoutException. When any of these exceptions occurs in the log, the status should be "DOWN". If any other exception occurs, the status should be "WARNING", and if there is no exception, it should show "OK". Also, once a high-priority exception occurs, we will notify the users by email alert. After the email alert it would be cleared, then the next events will generate. Once the next event generates and does not contain any high-priority exceptions, the status shown in the dashboard should be "OK" (or "WARNING" for low-priority exceptions). And if the latest event contains an exception again, then "DOWN". Note: when the application is down in real time, the log will not generate.

Here are my sample searches, but I'm not satisfied with the results:

1.

index=myIndex sourcetype=mySourcetype
| stats count as Total earliest(_time) as start_time latest(_time) as latest_time earliest(_raw) as Earliest_Event latest(_raw) as Latest_Event by _time
| eval stop=strptime(stop, "%m/%d/%Y")
| eval Earliest_Count = Total - 1
| eval Latest_Count = Total + 1
| eval status=case(((Latest_Count > Total) AND match(_raw, "TransactionRolledbackException")), "Down", ((Latest_Count > Total) AND match(_raw, "WIMSystemException")), "Down", ((Latest_Count > Total) AND match(_raw, "ConnectionWaitTimeoutException")), "Down", ((Latest_Count > Total) AND match(_raw, "\w+Exception")), "Warning", 1!=2, "OK")
| stats count by status

2.

index=myIndex sourcetype=mySourcetype
| eval status=case( match(_raw, "TransactionRolledbackException"), "Down", match(_raw, "WIMSystemException"), "Down", match(_raw, "ConnectionWaitTimeoutException"), "Down", match(_raw, "\w+Exception"), "WARNING", 1!=2, "OK")
| timechart count by status

Any help or suggestions would be really appreciated!! Thanks!
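Since the requirement keys the status off the most recent event rather than a count over time, one sketch is to classify every event and then keep only the latest event's status (index/sourcetype names as in the question; the regexes assume the exception names appear verbatim in _raw):

```spl
index=myIndex sourcetype=mySourcetype
| eval status=case(
    match(_raw, "TransactionRolledbackException") OR match(_raw, "WIMSystemException") OR match(_raw, "ConnectionWaitTimeoutException"), "DOWN",
    match(_raw, "\w+Exception"), "WARNING",
    true(), "OK")
| stats latest(status) AS current_status
```

Since the log stops entirely when the application is really down, this would need to be paired with a separate alert on the absence of events over some window to cover that case.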
Dear all, I am facing the same issue after completing the email server settings. After setting up the alert, I am not able to trigger it. My mail server is running on SSL.

index=_internal source=*python.log
2020-06-15 15:48:01,409 +0000 ERROR sendemail:475 - [Errno -2] Name or service not known while sending mail to: boo@bar.com

Please advise how to fix this issue. Thanks, Ram