All Topics

Hi, I am trying to filter events using the LOGIN keyword and drop the remaining events. I am trying the configuration below and it is not working. Any suggestions, please?

props.conf

[test_sourcetype]
TRANSFORMS-sample = test_authlog,setnull_test

transforms.conf

[test_authlog]
REGEX = (LOGIN)
DEST_KEY = queue
FORMAT = indexQueue

[setnull_test]
REGEX = (?!LOGIN)
DEST_KEY = queue
FORMAT = nullQueue
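A commonly suggested fix, sketched under the assumption that the goal is "index only events containing LOGIN": transforms in a TRANSFORMS- class run left to right, and a bare lookahead like (?!LOGIN) matches at almost every position in an event, so everything ends up in the null queue. The usual pattern is to null-route everything first, then requeue the matching events:

props.conf

[test_sourcetype]
TRANSFORMS-sample = setnull_test,test_authlog

transforms.conf

# send every event to the null queue first
[setnull_test]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# then route events containing LOGIN back to the index queue
[test_authlog]
REGEX = LOGIN
DEST_KEY = queue
FORMAT = indexQueue

With this ordering, only events containing LOGIN survive to be indexed.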
Hi SMEs, I am trying to write a regex to parse/map CEF-format fields as below, so that each field name captures its corresponding value. I am not able to capture values that have spaces in them. Seeking suggestions.

regex101 pattern:

c[n|s]\dlabel\=(\w+).*?c[n|s]\d\=([\.a-zA-Z0-9_-]+)

Sample event:

CEF:0|vendor|product|1.1|1234|PolicyAssetUpdated|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=20888 cs3label=AssetName cs3=ABCDPQRS.domain.com cn3label=DirectoryId cn3=856 cs4label=DirectoryName cs4=Active Directory cs5label=DomainName cs5=domain.com
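One common fix, sketched against the sample event above: instead of a value character class that excludes spaces, capture lazily up to the next cN/csN key (or end of event) with a lookahead. In SPL this could look like:

| rex max_match=0 "c[ns]\dlabel=(?<label>\w+)\s+c[ns]\d=(?<value>.*?)(?=\s+c[ns]\d|$)"

This yields multivalue label and value fields (handling "Automated System" and "Active Directory") that can be paired up with mvzip and mvexpand. Note also that c[n|s] in the original pattern matches a literal | character as well; c[ns] is what was intended.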
Hello, I am creating a query for my proxy data. The idea is to show every category I want in its own single-value chart, and any category that returns 0 events should still be represented by a 0. My current query is:

index="siem-cyber-proxy" action=blocked category=gambling OR category=malware
| eval isEvent=if(searchmatch("category"),1,0)
| stats count as myCount sum(isEvent) AS isEvent
| eval result=if(isEvent>0, isEvent, myCount)
| table result

This query adds the results from both categories together rather than splitting them into individual charts. I need to find out how to split the results so it creates multiple charts, or do I need to run the query once per category? Hopefully this makes sense. Thank you
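One sketch of a way to do this in a single search, assuming the index and category names from the post: count by category, then append a zero row for every expected category so that categories with no events still appear:

index="siem-cyber-proxy" action=blocked (category=gambling OR category=malware)
| stats count by category
| append [| makeresults | eval category=split("gambling,malware", ",") | mvexpand category | eval count=0 | table category count]
| stats max(count) as count by category

Each single-value panel can then post-process this base search with something like | where category="gambling" | fields count, rather than running a separate search per category.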
We have upgraded a few hundred forwarders (UF and HF) to Splunk 8.2.3 and are looking for benchmarking checks to make sure they are fully functional. We have a large clustered environment (indexers / SHs) with ES. Any SPL for benchmarking the upgraded instances, on ES and the forwarders, is appreciated. Thank you, and stay safe.
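One commonly used post-upgrade check, sketched from the metrics.log tcpin_connections events in _internal (field names may vary slightly by version): list every forwarder currently connecting to the indexing tier with its reported version and last contact time, then look for stragglers on old versions or forwarders that have gone quiet:

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as version latest(_time) as last_seen by hostname
| eval last_seen=strftime(last_seen, "%F %T")
| sort version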
Hi, I am getting:

KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

I stopped Splunk, moved the mongod folder aside, and started it again. Now I am getting:

2021-12-01T13:55:55.528Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2021-12-01T13:55:55.545Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2021-12-01T13:55:55.545Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1120
2021-12-01T13:55:55.545Z F - [main] ***aborting after fassert() failure

I want to regenerate server.pem. Just to confirm, is this the right command?

$SPLUNK_HOME/bin/splunk createssl

What are the risks?
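For the default self-signed certificate, a commonly cited procedure is simply to move the expired server.pem aside and let splunkd generate a fresh one at startup (a sketch; confirm first that nothing in your deployment is pinned to the old certificate):

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.expired
$SPLUNK_HOME/bin/splunk start

The main risk is that any component configured to trust that exact certificate (for example, forwarders with sslVerifyServerCert enabled) will fail to connect until it trusts the new one. If the instance uses CA-signed rather than self-signed certificates, reissue through the CA instead of regenerating.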
Dear Splunk Community, I have the following code:

<dashboard>
  <label></label>
  <row>
    <panel>
      <single>
        <search>
          <query>host="DESKTOP-L4ID3T2" source="BatchProcessor*" inventoryimport* "ExitCode: 0"
| stats count
| eval msg=case(count == 0, "Scan niet succesvol!", count &gt; 0, "Scan succesvol!")
| eval range=case(count == 0, "severe", count &gt; 0, "low")
| table msg</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="field">range</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>

If count returns 0 events, I expect the color of the single value field to be red (severe); otherwise it should be green (low). But using the above code, there is no color at all (besides the default black and white). Why is the above not working? Thanks in advance.
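One commonly suggested fix, sketched under the assumption that classic rangemap-style coloring is intended: the query drops the range field with | table msg, so the field option points at a field that no longer exists in the results. Keep both fields, display msg, and color by the range class:

<!-- keep both fields in the results -->
| table msg range

<!-- display msg, color via the rangemap class (severe = red, low = green) -->
<option name="field">msg</option>
<option name="classField">range</option>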
I'm new to this! Our custom app stopped working on Splunk Cloud, and support told me to change the XML file below from app.html to search.html because of a Cloud change. However, when I reupload the app, I get the error that app validation failed: "App does not support search head cluster environments." It passes all the vetting checks and just gives that error at the end. What could it possibly be looking for? Again, I'm new to this, so I can't find the answer on Google.

<?xml version="1.0"?>
<view template="pages/search.html" type="html" isDashboard="False">
  <label>Search</label>
</view>
The certificate configuration tutorials have unfortunately left me with some lingering questions.

Premise: they have taught me that in order to set up a third-party-signed certificate for a Splunk Enterprise server, I must:

1. Create a private key.
2. Create a CSR using the aforementioned private key.
3. Send the CSR to the current company's CA.
4. Receive a multitude of certificates: a server cert, a CA root cert, and perhaps CA intermediate certs.
5. Optionally combine the CA root and CA intermediate certs into a CAbundle.pem, which I can reference in any CA-cert fields (example: the sslRootCAPath field in server.conf).
6. Combine the server cert, private key, and CA bundle to create a complete Splunk Enterprise signed certificate (to be used by fields such as serverCert in inputs.conf or sslCertPath in outputs.conf).

So far so good. This procedure allows me to set up SSL connections between Splunk Enterprise instances. I have two scenarios where this setup probably does not work, and I would like to know how I can make them work:

1) I want to deploy 100 forwarders remotely and set them up so that they send their data to an indexer or heavy forwarder over SSL. Problem: the process of getting a third-party-signed certificate for each and every forwarder is arduous, and I don't believe it can be done remotely in any effective way. My thoughts: can I use (part of) the certificate of the data receiver (IDX/HF) as a public key, which I then send to all forwarders? Clearly I cannot use the concatenated certificate described in premise step 6, because it contains a private key. Could I maybe use the signed server-cert part that I received from the third party, pre-concatenation? A Splunk data receiver does not necessarily have to validate the certificate of a data sender, so I don't see why each universal forwarder should be equipped with its own certificate. There has to be a way to have them only check whether the indexer presents a valid certificate.

2) Say I want to connect another application (like the Infoblox Splunk Connector) to a Splunk data receiver while using SSL. My thoughts: I expect that sending the CA bundle (premise step 5) should be enough, so that the application side can create its own certificate and perhaps combine it with the CA root somehow. But I guess my question is the same as before: I cannot send the concatenated .pem from premise step 6. What is the best way to set up an SSL connection to another application?

Thanks in advance.
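For scenario 1, the intuition is correct: a forwarder that only verifies the receiver needs just the CA bundle (which contains no private key), not a certificate of its own, unless the receiver sets requireClientCert = true on its SSL input. A minimal sketch, assuming the CA bundle from premise step 5 is distributed to every UF and the receiver already presents its full certificate on port 9997 (host names are illustrative; exact setting names vary somewhat across Splunk versions):

server.conf on each universal forwarder:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca-bundle.pem

outputs.conf on each universal forwarder:

[tcpout:ssl_group]
server = receiver.example.com:9997
useSSL = true
sslVerifyServerCert = true
sslCommonNameToCheck = receiver.example.com

The same logic applies to scenario 2: sending the third-party application just the CA bundle from step 5 is the normal approach, so it can validate the receiver's certificate; the concatenated .pem from step 6 stays on the server it was issued for.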
I have 3 servers (two with 4x600 GB HDD, and one with 6x600 GB HDD plus 2x800 GB SSD). I want to build a small Splunk architecture for 10 GB/day with:

- 3 indexers in a cluster
- 1 combined deployment server, license master, and cluster master
- 1 search head

What is my best option for building this architecture? I was thinking of making a server cluster using Proxmox and then deploying each of those machines in a virtual environment, but I do not have a NAS as a separate device, and to get the best availability from server clustering I need to make those VMs as light as possible.
Hello, I recently messed up the permissions for the only account in my testing-environment instance. I no longer have access to search my existing indexes, and I cannot re-grant admin-level privileges to my account because I lack the privileges to do so. I have tried to make another account, but of course I am unable to give that account the permissions I need. If there is any way I can restore my access, please let me know.
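If you have filesystem access to the instance and it uses Splunk's native authentication, one commonly cited recovery is to re-add the admin role directly in the passwd file. A sketch (back up the file first; the exact line format can vary by version, and the hash shown is a placeholder):

$SPLUNK_HOME/bin/splunk stop
# edit $SPLUNK_HOME/etc/passwd so that your user's line lists admin in its roles field, e.g.
#   :youruser:$6$<hash>::Your Name:admin:you@example.com::
$SPLUNK_HOME/bin/splunk start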
Hi, I want to monitor a whole bunch of universal forwarders that I have set up and configured. All of their data is forwarded to a heavy forwarder, which forwards everything to Splunk Cloud. My problem is that I only have access to one index in the cloud, not the _internal index that receives UF metrics. Is it possible to change the index from _internal to the one I have access to in the UF config?
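This is possible: on a UF, the internal logs are collected by a default monitor input whose index is _internal, and the index can be overridden in local configuration. A sketch, with the index name as a placeholder (the target index must exist and accept the data in your Splunk Cloud stack):

inputs.conf in $SPLUNK_HOME/etc/system/local (or a deployed app) on each UF:

[monitor://$SPLUNK_HOME/var/log/splunk]
index = my_accessible_index

Keep in mind that the built-in forwarder monitoring dashboards expect these events in _internal, so any searches keyed on index=_internal will need adjusting.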
Hi fellow Alert Manager Users, is it possible to create Alert Manager incidents from SPL instead of from the custom alert action? This would allow for easier testing: incidents could be created manually, without needing to schedule a search. I tried the following (without diving into the code so far), but it did not work out:

| makeresults count=1
| eval index="alerts", auto_assign_owner="unassigned", impact="low", urgency="low", title="testTitle", owner="unassigned", append_incident=1, auto_previous_resolve=0, auto_subsequent_resolve=0, auto_suppress_resolve=0, auto_ttl_resove=0, display_fields="test", category="testCategory", subcategory="testSubcategory", tags="testTag", notification_scheme="testScheme"
| sendalert alert_manager param.index="alerts" param.auto_assign_owner="unassigned" param.impact="low" param.urgency="low" param.title="testTitle" param.owner="unassigned" param.append_incident=1 param.auto_previous_resolve=0 param.auto_subsequent_resolve=0 param.auto_suppress_resolve=0 param.auto_ttl_resove=0 param.display_fields="test" param.category="testCategory" param.subcategory="testSubcategory" param.tags="testTag" param.notification_scheme="testScheme"

It fails with the error:

File "/opt/splunk/etc/apps/alert_manager/bin/alert_manager.py", line 570, in <module>
    log.debug("Parsed savedsearch settings: expiry={} digest_mode={}".format(savedSearch['content']['alert.expires'], savedSearch['content']['alert.digest_mode']))
KeyError: 'content'

Thanks!
Hello, I am posting here to find out if anyone has an idea about the queries I should save in order to create a single dashboard to monitor my forwarders. I need queries to:

- show the maximum CPU usage (in percent) per monitored machine, and the maximum CPU usage (in percent) across all of these machines
- do the same as the previous one, but for average CPU usage (in percent)
- do the same, but for RAM instead of CPU (again in percent)
- do the same for disk usage (in percent)
- show inbound and outbound network traffic (in percent, against a 1 Gbps unit)

A starting sketch for the CPU panels is shown below. The data is collected from the monitored machines via the Splunk Add-on for Unix and Linux and stored in an index called Linux. Thank you!
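As a starting sketch for the CPU panels, assuming the add-on's cpu sourcetype with a pctIdle field and a CPU="all" summary row (the other metrics follow the same stats pattern with their own sourcetypes, e.g. vmstat for memory and df for disk):

index=Linux sourcetype=cpu CPU="all"
| eval cpu_used_pct = 100 - pctIdle
| stats max(cpu_used_pct) as max_cpu avg(cpu_used_pct) as avg_cpu by host
| appendpipe [ stats max(max_cpu) as max_cpu avg(avg_cpu) as avg_cpu | eval host="ALL" ]

The appendpipe adds an overall row; note that its average is an average of per-host averages, not an exact global average.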
Hi fellow Alert Manager Users, what is a good way to clear out the Alert Manager incidents, to restart fresh? I am creating new tickets and still testing their content; after finalizing the ticket content, I would like to start out fresh again. Seeing that Alert Manager keeps incidents, their details, and their change history separately (in an index, the KV store, and other places), the question is what should be cleared to get a clean new start. Thanks!
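A sketch of the usual two-part cleanup, with the caveat that the index name and KV store collection name below are assumptions to verify against your Alert Manager configuration before deleting anything. Incident events can be removed from the index with the delete command (destructive; requires the can_delete role):

index=alerts | delete

and a KV store collection can be emptied through the REST API, for example:

curl -k -u admin:changeme -X DELETE https://localhost:8089/servicesNS/nobody/alert_manager/storage/collections/data/incidents

Repeat the DELETE for each incident-related collection the app defines (check its collections.conf for the actual names).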
I know this topic has been discussed many times in this forum, but I have not found a case like mine so far. The index swaps the day and the month from the 1st of each month until %d=%m; from 12/12 (for example) onward, the data is stored correctly in December. The data I have in the log looks like this:

01/12/2021 12:10:04, ......

And the configuration I have in props.conf is as follows:

[source:://not/able/to/show/real/path/license_*.txt]
TIME_FORMAT=%d/%m/%Y %H:%M:%S
TIMESTAMP_FIELDS=Date

I have checked which props are taken into account with the command:

splunk cmd btool props list --debug

The properties seem to be taken into account. In my case a TIME_PREFIX is not applicable either, because there are no spaces or symbols at the beginning. I have tried everything. Any suggestions? I have run out of ideas.
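Two things worth checking, sketched with the redacted path kept as-is: TIMESTAMP_FIELDS only applies to structured inputs that use INDEXED_EXTRACTIONS; for plain text logs, timestamp recognition is controlled by TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD, and those props must be in place on the first full Splunk instance that parses the data (indexer or heavy forwarder), not only on a search head:

[source:://not/able/to/show/real/path/license_*.txt]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

TIME_PREFIX = ^ is valid even when the timestamp starts the line; it anchors the match, and the 19-character lookahead keeps the parser from reading past dd/mm/YYYY HH:MM:SS.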
I have sourcetype A, which has info about service accounts such as name, AU, email, full_name, and manager_name. But some of the events in sourcetype A do not contain the email, manager_name, and full_name fields. In those cases I have to look into another index and sourcetype, say B, to fetch that data. AU is the common field name in both. Can we combine the data without having to use join, for performance reasons?
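One common join-free pattern, sketched with placeholder index names: search both sourcetypes in one pass and merge per AU with stats:

(index=index_a sourcetype=A) OR (index=index_b sourcetype=B)
| fields AU name email full_name manager_name
| stats values(name) as name latest(email) as email latest(full_name) as full_name latest(manager_name) as manager_name by AU

If B changes rarely, another option is to export it on a schedule with outputlookup and enrich A's events with | lookup against the AU field.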
I have data coming in with a field called Result, which holds values such as:

1) "FAIL"
2) " FAIL "
3) "PASS"
4) " PASS "

I have created a dashboard where the Result field is used in a dropdown. I have removed the extra spaces from the field using | rex mode=sed field=Result "s/ //g" |. I also have a table showing the counts of PASS and FAIL values, where I have likewise removed the spaces with the rex command and built the counts using | stats count(eval(searchmatch("PASS"))) AS PASS count(eval(searchmatch("FAIL"))) AS FAIL |. Now, when I filter on "FAIL" or "PASS" in the dropdown and submit, the table on the dashboard does not show the counts for values with spaces (i.e. " PASS " and " FAIL ") and only shows the counts for the values without spaces. How can I solve this?
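A likely cause, hedged since the full panel search is not shown: the sed-mode rex only rewrites the Result field within the search that runs it, and searchmatch() tests the raw event text, so events whose raw text contains " PASS " still do not match a dropdown value of PASS. One sketch that filters and counts on a trimmed field instead (the token name result_token is hypothetical):

index=your_index ...
| eval Result=trim(Result)
| search Result="$result_token$"
| stats count(eval(Result=="PASS")) AS PASS count(eval(Result=="FAIL")) AS FAIL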
I've been trying to add a cluster map that tracks the location of a vehicle to my dashboard. Although I can see the location of the vehicle in a cluster map when using Search, I am not able to create a cluster map from the dashboard, because I cannot see a feature for creating one. I can only see the "world map", "US map", and "choropleth SVG" features. I copied the command I used in Search to track the location and pasted it into the SPL code sections of all of the map features offered by Dashboard Studio, but it did not work. Could you please guide me on creating a cluster map in Dashboard Studio? If not, could you please describe in detail another way to track the location? I appreciate your time.
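For reference, the cluster map produced in Search comes from the geostats command; a minimal sketch, assuming lat/lon field names in the events:

... | geostats latfield=lat longfield=lon count

As of recent versions, Dashboard Studio does not offer the classic geostats cluster map; the usual workarounds are to feed the same lat/lon results into a Studio map with a bubble or marker layer, or to build the panel in a Classic (Simple XML) dashboard, where the cluster map visualization is available.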
Hi, I'm collecting syslog events from the network on a dedicated universal forwarder, using a TCP input on the forwarder. In my Splunk installation I get all the syslog entries, but there is a number in angle brackets (<149>, for example) prepended to every log entry added to the Splunk index. The number is not always <149>; it changes, but I cannot find the logic behind the changes. This bracketed number prevents correct field extraction. So my question is: how do I get rid of the number in angle brackets? Should it be done on the forwarder? I'm sorry if my question is stupid or is well covered in the documentation; I'm relatively new to Splunk and still learning. Thank you!
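That number is the syslog PRI header: facility * 8 + severity, so <149> is local2/notice, which is why it varies from message to message. A common way to strip it at parse time, sketched with a placeholder sourcetype name (this belongs on the first parsing instance, i.e. an indexer or heavy forwarder, because a universal forwarder does not apply SEDCMD):

props.conf

[your_syslog_sourcetype]
SEDCMD-strip_pri = s/^<\d+>//

An alternative often recommended is to receive syslog with a dedicated syslog server (rsyslog/syslog-ng) writing to files that the forwarder monitors, which removes the PRI as part of normal syslog processing.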