Hi, we are using Splunk ES with notable events and suppressions. For the sake of completeness: we have alerts that produce notables, and some of these notables can be suppressed (through Splunk ES). So, in the "Incident Review" section we are able to see all the notables for which there are no suppressions. We are trying to send that same set (i.e. all the notables for which there are no suppressions). We tried adding the "Send to SOAR" action to one of the alerts that produce notables, but that way all the notables (even the suppressed ones) arrive on the SOAR. Do you know if there is a native feature (or quick way) to send only the notables for which there are no suppressions from Splunk to Splunk SOAR? Thank you in advance.
I'm totally and utterly new to Splunk. I just ran the Docker Hub sample and followed the instructions: https://hub.docker.com/r/splunk/splunk/ I opened the search tab and most search commands seem to work fine. For example, the following command:

    | from datamodel:"internal_server.server" | stats count

returns a count of 33350. However, this command:

    | tstats count from datamodel:"internal_server.server"

as well as this one:

    | tstats count

both return zero. How can I get tstats working in this Docker environment with the sample datasets?
I want to use the free cloud trial. I have done everything, but my access instance option is not being enabled. What should I do? Please refer to the screenshot below. Thank you. @suyogpk_11
Hi, while importing custom modules (e.g. `from logger import Logger`), in splunkd.log we can see a `ModuleNotFoundError: No module named 'logger'` error, generated by the file "/opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py". We suspect the library is somehow not able to identify internal modules and hence throws the error. We can also see the warning `DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses` in the same log file. This issue appears on Splunk version 9.0.x; on Splunk version 8.2.x it works fine. The main difference we have noticed between these two versions is that Python 2.7 support was removed in 9.0.x. We would like to know a possible solution for this error.
Hi, I'm trying to create a table that contains a list of tasks. The list is static and stored in a lookup table called tasks.csv. So far I have the following search:

    index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task | search [|inputlookup tasks.csv ]

This creates a table that looks like this:

task count
task_a 1
task_b 1
task_c 1
task_d 1

However, if a task in my static list does not appear in the search results, it does not show in the table. I want the table to contain the whole list of tasks, regardless of whether they appear in the search results or not, i.e.:

task count
task_a 1
task_b 1
task_c 1
task_d 1
task_e 0
task_f 0

Any ideas on how I can do this? The closest I've got is using a join, which does work but does not allow for a wildcard, meaning I'd need to specify the whole 'task':

    |inputlookup tasks.csv | join type=left task [ | search index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task] | fillnull value=0 task | table task count

Would appreciate any thoughts or suggestions. Thanks in advance.
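A sketch of one join-free approach: append the lookup rows with a zero count and then sum both sets, so every task in tasks.csv appears even with no matching events (this matches on the exact task value, so wildcard entries in the lookup would still need separate handling):

    index=one OR index=two
    | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)"
    | stats count by task
    | append [| inputlookup tasks.csv | eval count=0]
    | stats sum(count) as count by task
    | sort task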
I have a Splunk universal forwarder installed. Splunk Enterprise is seeing the forwarder; now I want to send network firewall logs to the forwarder host so that they are forwarded on to the Enterprise platform.
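A minimal inputs.conf sketch for the forwarder host, assuming the firewall can send syslog over UDP port 514; the port, index, and sourcetype here are assumptions to adapt to the actual firewall and environment:

    [udp://514]
    sourcetype = syslog
    index = firewall
    connection_host = ip

The index named here has to exist on the Splunk Enterprise side before data will land in it.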
I'm trying to produce an architecture diagram of our Splunk environment, and I want to know what each of our universal forwarders and heavy forwarders is ingesting and sending. I'm looking in inputs.conf and outputs.conf, but they are of no use. Is there a way to view what each forwarder is ingesting and sending, whether via the command line or in Splunk itself?
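One way to see what each forwarder is actually ingesting and sending, assuming the forwarders ship their own _internal logs (they do by default), is to query the metrics.log throughput data from the search head; a sketch:

    index=_internal source=*metrics.log* group=per_sourcetype_thruput
    | stats sum(kb) as total_kb by host, series
    | sort - total_kb

Here host is the forwarder and series is the sourcetype; the same picture by source or index comes from group=per_source_thruput or group=per_index_thruput.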
Hello, I'm using Splunk Cloud. I have Jenkins logs indexed in my system, but for some reason the events break incorrectly. I took an output example and added it to Splunk with the "Add Data" option, and there it looks OK, but when I'm searching on the sourcetype it is still broken. What is the best way to parse Jenkins logs? This is my sourcetype configuration:

    [ console_logs ]
    CHARSET=UTF-8
    LINE_BREAKER=([\r\n]+)
    NO_BINARY_CHECK=true
    SHOULD_LINEMERGE=true
    category=Structured
    disabled=false
    pulldown_type=true

and I want it to be shown in bulks:

    <time> Started by user
    <time> Finished:
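If the goal is one event per build block from "Started by user" down to "Finished:", one option is multiline event breaking on the start marker; a props.conf sketch (the BREAK_ONLY_BEFORE pattern, MAX_EVENTS value, and timestamp settings are assumptions to adapt to the real log layout):

    [console_logs]
    CHARSET = UTF-8
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = Started\s+by\s+user
    MAX_EVENTS = 10000
    TIME_PREFIX = ^
    MAX_TIMESTAMP_LOOKAHEAD = 30

Note that on Splunk Cloud such props.conf changes are applied through an uploaded app or the sourcetype management UI, not by editing files directly on the indexers.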
Hello, I'm currently exploring the integration of Splunk with SAP Analytics Cloud for our data analysis and visualization needs. While I've found some documentation on the topic, I'm looking for practical advice and insights from those who have successfully implemented this integration. Specifically, I'd like to know:
- What are the key considerations when setting up the integration between Splunk and SAP Analytics Cloud?
- Are there any best practices or recommendations for optimizing data transfer and visualization between the two platforms?
- How can I ensure that real-time data from Splunk is effectively utilized in SAP Analytics Cloud for timely decision-making?
- Are there any common challenges or pitfalls I should be aware of during this integration process, and how can I mitigate them?
I have checked https://community.splunk.com/t5/Community/ct-p/en-us/SAP Analytics Cloud Course for guidance. If you have experience with this integration or can point me to valuable resources, I would greatly appreciate your insights. Thank you!
Dear All, I have a dashboard with a Choropleth map presenting established connections from various countries.

    | index=*** sourcetype=*** bla bla | bla bla bla | iplocation IP | table Time Username IP Country | stats count by Country | eval count=case(count < 10, "1:Less than 10", (count > 10 AND count <= 20), "2:Between 10 to 20", (count > 20 AND count <= 50), "3:Between 21 to 50", (count > 51 AND count <= 100), "4:Between 51 to 100", (count > 100 AND count <= 500), "5:Between 101 to 500", (count > 500), "6:More than 500") | sort +count | geom geo_countries featureIdField=Country

In the legend, I see colours with the count of established connections. Can I edit the legend so that the name of each country is shown along with the count? I have spent many days googling but unfortunately I am unable to get the answer. I also tried the Cluster Map visualization, but unfortunately, no luck for me. Thank you very much in advance for your advice.

    | index=*** sourcetype=*** bla bla | bla bla bla | rename ext_device as VPN, ext_when as Time, ext_Username as Username, ext_IP_addr as IP | iplocation IP | geostats count by Country
Hello, I've encountered an issue in my Splunk environment that's been causing some headaches. When running a search, I receive the following error message: "Search Peer has the following message: 'Error in 'SearchParser': The search specifies a macro 'my_macro' that cannot be found.'" This error seems to be related to a missing macro called 'my_macro', but I'm unsure why this is happening and how to resolve it. I've checked my search query, and it appears to be correct. Can anyone provide some guidance on what might be causing this error and how I can go about resolving it? Any help or insights would be greatly appreciated! Thank you.
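A quick way to confirm whether the macro actually exists and how it is shared, assuming the REST search command is available on the search head (app context and permissions are common causes of "cannot be found"):

    | rest /servicesNS/-/-/configs/conf-macros splunk_server=local
    | search title="my_macro"
    | table title eai:acl.app eai:acl.sharing definition

If the macro only exists in another app or is private, sharing it globally (or running the search from the owning app) is a typical fix.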
Hi Team, I am trying to schedule an alert based on thresholds for two time windows:
- If the count falls to 0 events between 23:00 and 07:00
- If the count falls below 20 events between 07:00 and 23:00
Is it possible to define two thresholds like the above in one alert?

    index=ABC sourcetype=XYZ login | stats count | where count=0 between 23:00 to 07:00
    index=ABC sourcetype=XYZ login | stats count | where count<=20 between 07:00 to 23:00

Please advise. Thank you.
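A minimal single-alert sketch, assuming the alert is scheduled (for example hourly) and the threshold is chosen from the hour at which it runs; the index, sourcetype, and hour boundaries are taken from the post:

    index=ABC sourcetype=XYZ login
    | stats count
    | eval hour=tonumber(strftime(now(), "%H"))
    | where ((hour>=23 OR hour<7) AND count=0) OR ((hour>=7 AND hour<23) AND count<20)

The alert condition is then simply "number of results > 0".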
In outputs.conf you can configure compressed = <boolean> to compress the data, but the documentation doesn't specify how the compression is done. There is also no parameter specifying the compression method. So my question is: what compression is used by default, and is there any documentation showing that?
We have a Splunk Enterprise installation where everything is on the same server/install (search head etc.). At the moment we have a script that shuts down the Splunk services, then zips the whole /opt/splunk/ folder and copies it to a NAS. The problem is that this takes about 1.5 hours, and during that time we are not able to reach Splunk (since the service is shut down). Would it be possible to do this "on the fly" instead of shutting down the service, i.e. just zip the entire folder while it is "alive"? My thinking is that this won't be optimal, since bucket files will be "open" etc. But what is your take on this? Maybe there is another, better solution?
    [search] | stats count by ClientName Outcome

Example:
Client1 Positive count
Client1 Negative count
Client2 Positive count
Client2 Negative count
Client2 Unknown count

How do I get the percentage for each client's outcomes?
Client1 Positive count %
Client1 Negative count %
Client2 Positive count %
Client2 Negative count %
Client2 Unknown count %
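A sketch using eventstats to compute each outcome's share of its client's total, keeping [search] as the placeholder for the base search:

    [search]
    | stats count by ClientName Outcome
    | eventstats sum(count) as client_total by ClientName
    | eval percent=round(count/client_total*100, 2)
    | fields ClientName Outcome count percent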
Hi all,

So here is the deal: I have to prepare some (a lot of) db_outputs (using DB Connect); however, the corresponding tables do not exist yet. The colleagues responsible for them are on different tasks. I would like to configure the exports in advance, so that once the tables are ready, the output would just flow (and I might not be able to work on this later). I did not manage to find a way to do this in the GUI, as it requires every step of the way to be fulfilled, so even if I had data waiting for me, I would not be able to prepare the field matching. So my idea is to configure them in db_outputs.conf; then a restart of the HF should (or at least I think) be the solution. However, there is this:

    customized_mappings = <string>
    # required
    # Specifies the output data name (fieldx) and database column number (1...n) mappings.
    # The expected format is:
    # field1:column1:type1,field2:column2:type2,…,fieldN:columnN:typeN

And I do not know where to get the values for the types (I already know which fields will be varchar, timestamp, etc.; what I do not know is the numeric representation of the field types). So it is a two-fold question:
1) Does anybody know this numeric-to-field-type mapping (for example varchar=12, unassigned integer=4; these I got from previous tables)?
2) Has anyone configured outputs in advance, before the corresponding table is even created, and does it start later automatically?

have fun!
rd
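On question 1, the numeric codes observed (varchar=12, integer=4) line up with the java.sql.Types constants, so a reasonable assumption is that the mappings use those values (VARCHAR = 12, INTEGER = 4, TIMESTAMP = 93, DATE = 91, BIGINT = -5, NUMERIC = 2). A hypothetical fragment, with stanza, field, and column names made up for illustration:

    # db_outputs.conf (fragment) -- names are placeholders; types assumed to follow java.sql.Types
    [my_future_export]
    customized_mappings = event_time:EVENT_TIME:93,username:USERNAME:12,login_count:LOGIN_COUNT:4

Whether the output starts automatically once the table appears is worth testing separately; this sketch only covers the type codes.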
Hi, I am importing a CSV file into Splunk Enterprise that has a semicolon as the field separator, but Splunk does not parse it correctly. For instance this field --> SARL "LE RELAIS DU GEVAUDAN";;;"1 is treated as a whole and is not being split. Do you know which settings I should configure in the file importer wizard in order to import it? Thank you. Kind regards, Marta
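If the wizard does not split on the semicolon, one option is to define the sourcetype in props.conf with indexed extractions and an explicit delimiter; a sketch (the sourcetype name is a placeholder):

    [semicolon_csv]
    INDEXED_EXTRACTIONS = csv
    FIELD_DELIMITER = ;
    FIELD_QUOTE = "
    HEADER_FIELD_DELIMITER = ;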
Hello, I recently completed a training module on the Splunk platform, and I encountered an issue with the completion status not updating correctly. Despite finishing all the required tasks, it still shows as 'in progress' on my profile. I've double-checked that I completed all the module components, including the quizzes, and I even received a passing score. However, the module's status remains unchanged. I'm reaching out here for assistance because I'm not sure how to resolve this problem. Has anyone else experienced a similar issue, and if so, how did you go about fixing it? Any tips, insights, or guidance on resolving this module completion status issue would be greatly appreciated. Thank you in advance for your help!
Hi Team, I am trying to monitor a .NET Windows service application and I have followed the instructions in the link below: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/configure-the-net-agent-for-windows-services-and-standalone-applications I am not a developer and don't have the source code (namespace/class/functions) of my Windows service, so I couldn't add custom POCO entry points. How do I discover my .NET functions from the Windows service? Can anyone help me? Regards, Durai
I have a CSV file which has some columns. There is one column named GROUP, and the data in that column are in the format {'a':1,'b':2}; there can be any number of strings. There is another column VALUE, and the data are in the format {'a','b'}. I want to check whether the strings in the VALUE column are present in the GROUP column and create a separate column named DATA with the strings that are not present. I am not sure how to achieve this in Splunk using commands. Does anyone have any suggestions? Example:

Group                     Value        Data
{'a':'1','b':'2'}         {'a','b'}
{'a':1,'b':'2'}           {'a'}        {'b'}
{'a':1,'b':'2','c':'3'}   {'a'}        {'b','c'}

There are many columns like these, and the strings present in the GROUP column can be more numerous.
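A sketch of one possible approach using multivalue eval functions, with the field names written as GROUP/VALUE/DATA per the description: extract the quoted keys from each column, then keep the GROUP keys that have no exact match among the VALUE keys. This assumes the keys contain no regex special characters and relies on mvmap dropping NULL results:

    | rex field=GROUP max_match=0 "'(?<group_key>[^']+)'\s*:"
    | rex field=VALUE max_match=0 "'(?<value_key>[^']+)'"
    | eval DATA=mvmap(group_key, if(isnull(mvfind(value_key, "^".group_key."$")), group_key, null()))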