All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello! I have recently installed the Splunk Add-On for Microsoft Cloud Services. The Azure app and Splunk app both have the correct information/permissions to operate (or so I assume, but everything reads green). However, upon seeing no info coming in, I ran across this error while trying to solve my issue:

2020-05-05 16:47:46,940 +0000 log_level=ERROR, pid=21087, tid=MainThread, file=ta_mod_input.py, func_name=main, code_line_no=200 | Microsoft Cloudservices Azure Audit task encounter exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/data_collection/ta_mod_input.py", line 197, in main
    config_cls=configer_cls, log_suffix=log_suffix)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/data_collection/ta_mod_input.py", line 126, in run
    loader = dl.create_data_loader(meta_config)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/data_collection/ta_data_loader.py", line 164, in create_data_loader
    import splunktalib.event_writer as ew
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktalib/event_writer.py", line 2, in <module>
    standard_library.install_aliases()
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/future/standard_library/__init__.py", line 485, in install_aliases
    import test
  File "/opt/splunk/etc/apps/SA-ldapsearch/bin/test.py", line 21, in <module>
    splunkAdmin = raw_input("Enter Splunk Admin: ")
EOFError: EOF when reading a line

I did end up looking through the Python code and, in fact, the SA-LDAPSearch app has a test.py file that grabs admin session info on the CLI. My main immediate question is: how do I fix this error? A secondary question, out of curiosity: why do they do this? I appreciate any assistance! By the way, we are on Splunk Enterprise 7.2.6.
EDIT: I should add, at no point during the installation of this app does it ask for these credentials. I installed the app both from a user with admin rights and the admin account in the GUI. EDIT 2: Some more information... the line below the event, which includes host, source, and sourcetype, also includes a "splunk_server" line that seems to only include the indexers. Would that just mean it's pulling the event from there? Or is that where the event is happening? Sorry for so many edits, I am just posting as I notice things.
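The EOFError suggests that the add-on's bundled `future` library runs `import test`, and that import is being resolved against SA-ldapsearch's bin/test.py rather than Python's own `test` package, because Splunk app bin directories end up on sys.path. A minimal, self-contained sketch of that shadowing mechanism (the file contents and variable name here are invented for illustration):

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway directory containing a module named test.py,
# standing in for SA-ldapsearch's bin/test.py (the file name is real,
# the contents are invented for this demo).
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "test.py"), "w") as f:
    f.write("WHO = 'impostor'\n")

# Splunk apps often do sys.path.insert(0, <app>/bin), so a bare
# "import test" later resolves to the app's test.py instead of
# Python's standard "test" package.
sys.path.insert(0, workdir)
test = importlib.import_module("test")
print(test.WHO)  # -> impostor
```

This is why an unrelated app's test.py can crash the add-on: the shadowing module runs at import time and blocks on raw_input, which fails with EOFError in a non-interactive modular input.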
Hi all, I'm looking to create a count of events that each string in a list appears in: the number of events Item1 appears in, how many events Item2 appears in, and so on. The query so far looks like this:

index=[index] message IN ("Item1*", "Item2*", "Item3")
| stats count by message

For it to then produce the following:

Item1 8
Item2 10
Item3 4

However, Item2 has multiple slightly different variations, meaning I get a bunch of individual items in the list alongside the groupings, looking like the following:

Item1 8
Item2 - /folder1/fileA error 1
Item2 - /folder2/fileB error 1
Issue with /folder2/fileC [Item2 error] 1

Is there a way to coalesce the results but still have the total number of events by search string? I've tried searching but haven't managed to find anything on here yet. Any help would be appreciated! Thank you.
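One way to coalesce the variants is to derive a normalized key before the stats, for example with case() and match(). A hedged SPL sketch (the field and item names come from the question; the exact match patterns are assumptions you would tune):

```
index=[index] message IN ("Item1*", "*Item2*", "Item3*")
| eval item=case(match(message, "Item1"), "Item1",
                 match(message, "Item2"), "Item2",
                 match(message, "Item3"), "Item3")
| stats count by item
```

Each event is then counted once under whichever label its message matches, regardless of the surrounding text.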
I am using a Splunk Cloud instance and I can no longer see the INVITE USER option on the instance page. It disappeared suddenly. I had invited many users through the Invite User option, but all of a sudden this option is missing. Can someone please help me with this?
I am writing a custom webhook alert action and want to include a retry function that will last more than a few retries. To do this, I want to store the file that I am sending within the alert action app structure; however, after reviewing the Splunk app structure documentation, I am none the wiser. Should it go here:

myApp/
  appserver/
    static/
      file.csv

Or should I create my own directory for temp files under the bin directory:

myApp/
  bin/
    temp/
      file.csv
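For what it's worth, appserver/static is generally meant for web-served assets, so a scratch area elsewhere in the app is a common convention. A hedged Python sketch of building such a path (the myApp name and the local/data layout are assumptions for illustration, not an official Splunk requirement):

```python
import os
import tempfile

# For this runnable sketch, point SPLUNK_HOME at a scratch directory;
# on a real forwarder/indexer the variable is already set by Splunk.
os.environ["SPLUNK_HOME"] = tempfile.mkdtemp()

def alert_tmp_dir(app="myApp"):
    """Return a writable scratch directory inside the app for retry files.

    The local/data location is a convention assumed here, not an
    official Splunk requirement.
    """
    path = os.path.join(os.environ["SPLUNK_HOME"],
                        "etc", "apps", app, "local", "data")
    os.makedirs(path, exist_ok=True)
    return path

retry_dir = alert_tmp_dir()
print(retry_dir)
```

Keeping the files under the app's own directory (rather than bin/) also avoids mixing data with executable code, which some app-vetting checks frown on.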
I'm using a deployment server to distribute a single inputs.conf file to a number of servers in a class. The locations of the files that I need to monitor are similar between the servers, but sometimes (sub)directories refer to the servers instead of being generically named. This circumstance made me reach for wildcards / whitelists in determining the paths of the files to watch. (The alternative would be creating separate monitor stanzas for each individual server in the class, which defeats the purpose.) I can't get it to work, though. What am I missing?

These are the directories / files on the various servers I want to monitor:

/base/logs/appl/xxx.seg.ex/logfile1.log
/base/logs/appl/xxx.seg.ex/logfile2.log
/base/logs/appl/yyy.seg.ex/logfile1.log
/base/logs/appl/yyy.seg.ex/logfile2.log

And these are the monitor stanzas I'd set up in inputs.conf:

[monitor:///base/logs/appl/*.seg.ex/logfile1.log]
index=index

[monitor:///base/logs/appl/*.seg.ex/logfile2.log]
index=index

Unfortunately this does not work. Checking the _internal index made clear that the monitor stanzas are not OK. Apparently implicit whitelists were added:

'^\/base\/logs\/appl/[^/]*.seg.ex/logfile1.log$' (on path 'monitor:///base/logs/appl') [1]
'^\/base\/logs\/appl/[^/]*.seg.ex/logfile2.log$' (on path 'monitor:///base/logs/appl') [2]

The _internal index also contains log events saying:

TailingProcessor - Will not call watch on path '/base/logs/appl/xxx.seg.ex/logfile1.log' due to stanza: monitor:///base/logs/appl/*.seg.ex/logfile1.log [1]
TailingProcessor - Will not call watch on path '/base/logs/appl/xxx.seg.ex/logfile2.log' due to stanza: monitor:///base/logs/appl/*.seg.ex/logfile1.log [2]

Why doesn't this work? And how could I get it to work as desired?
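A clue in the TailingProcessor messages is that both wildcard stanzas collapse to the same base path (monitor:///base/logs/appl), and overlapping monitor stanzas on one base path conflict with each other, so the later one is ignored. One hedged workaround (a sketch, untested against this environment) is a single stanza on the parent directory with an explicit whitelist covering both log files:

```
[monitor:///base/logs/appl]
index = index
whitelist = \.seg\.ex/logfile[12]\.log$
```

The whitelist is a regular expression matched against the full path, so the dots are escaped and the two file names are folded into one character class.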
Hi, I'm using the expression (?ms)book.(?\d{7}-\d) to extract some numbers from this input (thanks @to4kawa): " new contributors: Set(book.1272473-1, book.1272472-1, book.1272477-1), removed contributors: Set(book.1271398-1, book.1271397-1)". This gives me all 5 numbers (1272473, 1272472, 1272477, 1271398, 1271397), but I'm interested only in the numbers before the keyword "removed" (1272473, 1272472, 1272477). Please bear in mind there could be from 1 to 5 strings in the "new contributors" section, and I would like to extract all of them. Thanks in advance, Szymon
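The two-step logic (cut the event at the keyword, then extract every number from what is left) can be sketched in Python; the sample string is the one from the question:

```python
import re

# Sample event from the question.
event = ('" new contributors: Set(book.1272473-1, book.1272472-1, '
         'book.1272477-1), removed contributors: Set(book.1271398-1, '
         'book.1271397-1)"')

# Step 1: keep only the text before the keyword "removed".
new_part = event.split("removed")[0]

# Step 2: pull every 7-digit book number out of what is left.
numbers = re.findall(r"book\.(\d{7})-\d", new_part)
print(numbers)  # -> ['1272473', '1272472', '1272477']
```

In SPL the same idea can be approximated by first isolating the text before "removed", e.g. `eval new_part=mvindex(split(_raw, "removed"), 0)`, and then running `rex field=new_part max_match=5` with the existing pattern (hedged: field names here are invented for illustration).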
Any plans to have this app supported in Splunk Cloud, especially on 7.2, 7.3, and 8.x?
I would either like to send the results table as the description field to ServiceNow, or be able to pass the CSV results and attach them to the opened incident ticket. The goal is to work the ticket from ServiceNow without having to go into Splunk to review the results. As of now, in the description field I am passing $result.src_ip$ $result.dest_ip$ $result.threat_intel_list$ $result.threat_match_field$ $result.threat_collection$ $result.original_sourcetype$ $result.count$, but this only passes the first result of the report. Has anyone been able to pass all the search results into a single ServiceNow ticket?
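That is expected: $result.field$ tokens expand against the first result row only. If writing a custom alert action is an option, Splunk hands such an action a JSON payload whose results_file entry points at the full result set as a gzipped CSV, so every row can be folded into one description string. A hedged Python sketch (the demo field values are invented; the ServiceNow call itself is omitted):

```python
import csv
import gzip
import os
import tempfile

def build_description(results_file, limit=100):
    """Fold every result row into one text block for the ticket body.

    results_file is the gzipped CSV path that Splunk passes to a custom
    alert action in its JSON payload (payload["results_file"]).
    """
    lines = []
    with gzip.open(results_file, "rt") as f:
        for i, row in enumerate(csv.DictReader(f)):
            if i >= limit:
                lines.append("... (%d row limit reached)" % limit)
                break
            lines.append(" | ".join("%s=%s" % kv for kv in row.items()))
    return "\n".join(lines)

# Demo with a fabricated results file (field values invented).
demo = os.path.join(tempfile.mkdtemp(), "results.csv.gz")
with gzip.open(demo, "wt") as f:
    f.write("src_ip,dest_ip,count\n1.2.3.4,5.6.7.8,12\n9.9.9.9,5.6.7.8,3\n")
print(build_description(demo))
```

The resulting string could then be posted to the ServiceNow table API as the incident description, or the decompressed CSV attached via the attachment API.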
Good afternoon. I can validate in the Monitoring Console (MC) which indexes have events and which do not, but is it possible to know which indexes are not being queried by users? This would let you know that the data is not being used and could possibly be deleted. Your support is appreciated.
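One rough heuristic is to mine the _audit index for index= terms in users' searches and compare the result against your full index list. Treat this as an assumption-laden sketch: it misses indexes reached through macros, data models, tags, and default-index searches:

```
index=_audit action=search info=granted search=*
| rex field=search max_match=20 "index\s*=\s*\"?(?<searched_index>[A-Za-z0-9_*-]+)"
| mvexpand searched_index
| stats latest(_time) as last_searched dc(user) as distinct_users by searched_index
```

Any index that never appears in this output over a long enough window is a candidate for the "nobody queries this" list, pending the caveats above.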
Hello, I have a quick question regarding creating an expressive PDF file for a customer based on existing reports and/or dashboards. The current export functionality is not the most appropriate, because each panel is printed on its own PDF page. I have already found some apps that handle this in a somewhat better way: PDF Report Capture for Splunk, and Smart PDF Exporter for Splunk. My question is: does anybody have other experiences or ideas on how to implement such projects? And is Splunk working on a better built-in solution for the PDF export function? If so, when can we expect it?
I'm trying to set up a report/search so that I can get statistics on VPN users. We have a WatchGuard firewall. We're using the free version of Splunk currently, so we cannot add the WatchGuard app. Below is a sample of the log files. How can I set up a report or dashboard that will pull the domain (@Firebox-DB) as the field and the alias (the part before the @) as the field data?

SSL VPN user xyz@Firebox-DB
** user[xyz@Firebox-DB] rcv rqst [TOsteen@Firebox-DB
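A hedged starting point is a search-time rex that captures the alias and domain around the @. The capture names vpn_user/vpn_domain are invented here, and the pattern may need tuning once the full log format is visible:

```
sourcetype=<your_watchguard_sourcetype>
| rex "(?<vpn_user>[^\s\[@]+)@(?<vpn_domain>[A-Za-z0-9-]+)"
| stats count by vpn_domain, vpn_user
```

Against the samples above this should yield vpn_user=xyz (and TOsteen) with vpn_domain=Firebox-DB; the character class deliberately excludes whitespace, "[" and "@" so that "user[xyz@..." captures only the alias.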
Hi, I registered for the 15-day trial AppDynamics SaaS account and have received the portal and other welcome emails. My account is valid, because I can log in to https://accounts.appdynamics.com/ to view my subscriptions, but when I follow the link from the email to https://[my_account_name].saas.appdynamics.com/controller/#/location=AD_GETTING_STARTED and enter my username and password, it keeps telling me "Login failed" without any further information. I know the username and password are entered correctly, because they are stored in my browser and entered from there. I have tried to reset the password numerous times, but I don't receive any further emails. Help would be appreciated, thanks.
Hello, I have a search where I would like to compare the count of one search result against its running weekly average. This appears to work. However, I would like to replicate this search across 14+ values of Field. I was building a dashboard with each field value as a separate report, and I couldn't help but wonder if there was a way for me to append all the search results together. As written, I know I'll need to add an evaluation for naming each row, but it also runs incredibly slowly. Both searches share the portion "foo", differing only by their Field value. Is there any way to reuse the search result in parallel like this, or is my general approach wrong?

foo Field="Bar" earliest=-7d latest=@h
| timechart span=1h count
| eval StartTime=relative_time(now(), "-24h@h")
| eval Series=if(_time>=StartTime, "Todays ", "Average ")
| eval Hour=strftime(_time, "%H")
| chart avg(count) by Hour Series
| where Hour=strftime(now(), "%H")
| append
    [ search foo Field="Baz" earliest=-7d latest=@h
    | timechart span=1h count
    | eval StartTime=relative_time(now(), "-24h@h")
    | eval Series=if(_time>=StartTime, "Todays ", "Average ")
    | eval Hour=strftime(_time, "%H")
    | chart avg(count) by Hour Series
    | where Hour=strftime(now(), "%H")]
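It may be possible to avoid append entirely by keeping Field as a split-by column through a single pass over the data. A hedged sketch of that reshaping (same field names as the question; untested against real data, and the IN list would grow to all 14+ values):

```
foo Field IN ("Bar", "Baz") earliest=-7d latest=@h
| bin _time span=1h
| stats count by _time, Field
| eval Series=if(_time >= relative_time(now(), "-24h@h"), "Todays", "Average")
| eval Hour=strftime(_time, "%H")
| where Hour=strftime(now(), "%H")
| chart avg(count) over Field by Series
```

One base search instead of 14 appended subsearches should address most of the slowness, since the raw events are read only once.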
I would like to compare results from at least two tools for both vulnerabilities and malware analysis. Thanks
I'm trying to find which URLs are the same that two endpoints went to, but at different times. Example: what URLs did endpoint 1 go to between 7:00 and 7:15 AM Monday morning, what URLs did endpoint 2 go to Tuesday afternoon between 4:00 and 4:15 PM, and list the URLs that are the same. I also have an exclusion for some common URLs like google.com. The point of this is: when malicious traffic is detected from two endpoints going to the same destinations, check to see what other URLs they went to that were the same. Here is what I have, but it doesn't limit the time, so there are a ton of extra URLs that shouldn't be showing:

index=web sourcetype=proxya filter_result=success c_ip=X.X.X.X OR c_ip=y.y.y.y NOT cs_host IN ( google.com, bing.com, etc.org )
| dedup cs_host, c_ip
| eventstats count by cs_host
| where count = 2
| table _time c_ip cs_host
| sort -_time
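Time ranges can be applied per endpoint inside the base search by grouping earliest/latest with each c_ip. A hedged sketch (the timestamps are placeholders for your two windows, and the IPs are the masked values from the question):

```
index=web sourcetype=proxya filter_result=success
    NOT cs_host IN (google.com, bing.com, etc.org)
    ((c_ip=X.X.X.X earliest="05/04/2020:07:00:00" latest="05/04/2020:07:15:00") OR
     (c_ip=y.y.y.y earliest="05/05/2020:16:00:00" latest="05/05/2020:16:15:00"))
| stats dc(c_ip) as endpoints by cs_host
| where endpoints=2
```

Using dc(c_ip) instead of dedup plus eventstats also guards against one endpoint hitting the same host twice being miscounted as two endpoints.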
Just trying to install AppDynamics for the first time. I have downloaded the agent, and when I unzip it I have the following structure (see attached screenshot): two conf directories, two sets of jars, two javaagents, etc. My questions are: what is this structure for, which conf and jar should I use, and what is the purpose of the second copy in a subdirectory with additional config files and src files? Thanks.
Hi All, recently I have noticed that some of our saved searches are failing with errors like the one below:

"Failed to start search for id="scheduler__abcde__Qk1TX1dNX0lOVEdfTUVUUklDUw__RMD57438a1f3bbe5dac6_at_1588593600_88844". Dropping failedtostart token at path=/opt/splunk/var/run/splunk/dispatch/scheduler__abcde_Qk1TX1dNX0lOVEdfTUVUUklDUw__RMD57438a1f3bbe5dac6_at_1588593600_88844 to expedite dispatch cleanup"

Could anyone suggest what the issue could be?
Hi Team, I have opened an account for a free trial on Splunk Cloud, but the instances are not created. Will it take some time to create the instance, or is there some process I need to follow? Please help me.
Hi All, here I'm trying to build a query which produces the values for yesterday and today:

earliest=-1d@d latest=@d index="summary"
| stats count(count) as yesterday_count by orig_sourcetype
| appendcols
    [ search earliest=@d latest=now index="summary"
    | stats count(count) as today_count by orig_sourcetype]

What I'm looking for is a report covering day-1, day-2, day-3, day-4, and so on. If any day's count is more or less than 30% of the previous days' counts, it should trigger an alert.
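The alert condition itself (today's count differing from the average of the previous days' counts by more than 30%) can be sketched as plain logic. A hedged Python illustration with made-up counts:

```python
def deviates(today, previous, threshold=0.30):
    """True when today's count differs from the average of the previous
    days' counts by more than `threshold` (30% by default)."""
    if not previous:
        return False
    avg = sum(previous) / float(len(previous))
    if avg == 0:
        return today != 0
    return abs(today - avg) / avg > threshold

# Made-up daily counts for one orig_sourcetype:
print(deviates(130, [100, 100, 100]))  # exactly 30% -> False
print(deviates(145, [100, 100, 100]))  # 45% above   -> True
```

In SPL this corresponds to an alert condition along the lines of `| where abs(today_count - avg_prev) / avg_prev > 0.3` after computing per-day counts with timechart or stats (field names here are assumptions mirroring the question's query).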
Is UF 7.2.8 compatible with RHEL 8? Please let me know the minimum version of the UF agent that is compatible with RHEL 8.