All Topics


Hello all, I am trying to figure out why my iplocation report isn't providing the city and country under Statistics. Below is my search; it provides the IP field in the table, but the other two columns are blank. Any assistance would be great here.  index=wineventlog EventCode=4624 | search src_ip="*" ComputerName="*" user="*" | eval "Source IP" = coalesce(src_ip,"") | eval clientip=src_ip | iplocation allfields=false "Source IP" | table "Source IP", city, country
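One thing worth checking, as a hedged suggestion rather than a confirmed fix: the iplocation command writes its output fields with capitalized names (City, Country, Region), so a table on lowercase city and country can come up blank even when the lookup itself worked. A minimal sketch of the same search with the capitalized field names:

index=wineventlog EventCode=4624
| search src_ip="*" ComputerName="*" user="*"
| eval "Source IP" = coalesce(src_ip,"")
| iplocation allfields=false "Source IP"
| table "Source IP", City, Country

If City and Country are still blank, it is worth checking whether the "Source IP" values are public addresses, since the bundled GeoIP database has no city or country data for private RFC 1918 ranges.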
Hello, we have Splunk on Linux servers. Has anyone installed the Falcon Sensor from CrowdStrike on their Linux servers that host Splunk? CrowdStrike is a next-gen antivirus solution. Any issues or unforeseen consequences? Thanks.
I am trying to use a token in a rex command, but I can't get it to work. I'm setting the token as follows (token_keywords_mv is a multivalue token): <set token="token_rex">mvjoin(mvmap('token_keywords_mv',"(?&gt;".'token_keywords_mv'."&lt;".'token_keywords_mv'."+?)"), "|")</set> When I use it in a rex command, the token is inserted as the literal expression rather than its evaluated result: ... | rex field=_raw '(?i)$token_rex$'  It gives me the following error: Error in 'rex' command: Encountered the following error while compiling the regex ''(?i)mvjoin(mvmap('token_keywords_mv'': Regex: missing closing parenthesis.    When I set the token to the result of the eval directly, as in the following, it works: <set token="token_rex">(?&lt;lorem&gt;lorem+?)|(?&lt;ipsum&gt;ipsum+?)|(?&lt;situs&gt;situs+?)</set>
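A hedged sketch of one workaround: <set> stores the literal text of the expression, so mvjoin/mvmap are never evaluated, while Simple XML's <eval> element evaluates the expression before storing the result in the token. Assuming a Splunk version with mvmap (8.0+) and keeping the surrounding <change>/<condition> block as it is, something like:

<eval token="token_rex">mvjoin(mvmap('token_keywords_mv', "(?&lt;" . 'token_keywords_mv' . "&gt;" . 'token_keywords_mv' . "+?)"), "|")</eval>

The group syntax here follows the working example ((?&lt;name&gt;name+?)); the evaluated string should then drop into the rex command via $token_rex$ as expected.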
Which of the following Splunkbase add-ons is best for Sophos NG Firewall data for CIM mapping?
https://splunkbase.splunk.com/app/6187/
https://splunkbase.splunk.com/app/4543/
https://splunkbase.splunk.com/app/5378/
Hello, thanks for taking the time to read/consider my question! I'm working on reducing the overhead for Windows event logs that we bring in via UFs sitting on Windows workstations and servers, by trimming some of the redundant text at the end of each log using a props.conf file located within /etc/system/local on each heavy forwarder. My understanding was that a props.conf placed on the heavy forwarder would filter the messages before they are sent to Splunk Cloud, but I'm starting to think that props.conf isn't read until the indexing tier. My question is this: if I need to keep indexAndForward=false on my heavy forwarders to avoid the licensing and overhead, how can I apply props.conf to filter events before Splunk Cloud? Do I need to submit a support ticket for them to place the props.conf on the cloud-based indexers? Many thanks in advance.
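For what it's worth, a hedged sketch under the assumption that the UFs forward through the heavy forwarders on their way to Splunk Cloud: parsing-time settings such as SEDCMD and TRANSFORMS are applied at the first heavy (parsing) component in the pipeline, and indexAndForward=false only disables local indexing, not parsing, so a props.conf on the HF should still take effect before the data leaves for the cloud. The sourcetype and regex below are placeholders to adapt:

# props.conf on the heavy forwarder (ideally in an app, e.g. etc/apps/my_trim_app/local)
[WinEventLog:Security]
# Hypothetical example: strip the boilerplate explanation text that starts with
# "This event is generated" through the end of the event.
SEDCMD-trim_wineventlog = s/This event is generated[\S\s]+$//g

If the UFs instead send straight to Splunk Cloud, the trimming has to live on the cloud indexers, which typically means a support ticket or a self-service/private app, depending on your Splunk Cloud experience.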
Hi Splunkers, in our environment we use Splunk DB Connect. When we configure a new connection to different DBs, we are facing the following error: The error is self-explanatory and would normally be clear to us, but what seems strange is:
1. When we configure the connection, we do NOT use encryption; we clearly disable encryption and flag the read-only parameter.
2. The error comes up with many DBs on the same network, but not all.
3. The DB team has confirmed that no encryption has been set on the DB servers of that network.
So my question is: why do we get a TLS error if we are telling Splunk not to use it? And how do we solve it?
Hello, when I try to install cluster-agent-operator.yaml I get the error below.
Error from server (Forbidden): error when creating "cluster-agent-operator.yaml": roles.rbac.authorization.k8s.io is forbidden: User "j" cannot create resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "appdynamics": requires one of ["container.roles.create"] permission(s).
Thanks.
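A hedged sketch of how to confirm and work around this, assuming you have kubectl access: the message says user "j" is not allowed to create Role objects in the appdynamics namespace, so the install either needs to be run by a cluster admin or that user needs the RBAC rights granted first. For example:

# Check whether the current user may create Roles in the appdynamics namespace
kubectl auth can-i create roles.rbac.authorization.k8s.io -n appdynamics

# Illustrative option for a cluster admin: give user "j" the built-in "admin"
# ClusterRole, scoped to the appdynamics namespace only
kubectl create rolebinding appd-operator-install --clusterrole=admin --user=j -n appdynamics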
I am not sure how to set BREAK_ONLY_BEFORE. I have tried the setting below. All my logs are in log4j format and start with a timestamp like [2022-04-05 11:18:23,839]. BREAK_ONLY_BEFORE: date My logs, which are sent to Splunk through fluentd and are arriving as different events, look like this:
[2022-04-05 11:18:23,839] WARN Error while loading: connectors-versions.properties (com.amadeus.scp.kafka.connect.utils.Version) java.lang.NullPointerException
at java.util.Properties$LineReader.readLine(Properties.java:434)
at java.util.Properties.load0(Properties.java:353)
at java.util.Properties.load(Properties.java:341)
at com.amadeus.scp.kafka.connect.utils.Version.<clinit>(Version.java:47)
at com.amadeus.scp.kafka.connect.connectors.kafka.source.router.K2KRouterSourceConnector.version(K2KRouterSourceConnector.java:62)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:380)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:385)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:355)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:328)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:261)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:253)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
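A hedged sketch of a props.conf approach, assuming the aim is to start a new event at each bracketed timestamp: BREAK_ONLY_BEFORE takes a regular expression rather than the literal word "date", and the timestamp settings below help Splunk anchor on the log4j time. The sourcetype name is a placeholder:

# props.conf on the component that first parses the data (indexer or heavy forwarder)
[my_log4j_sourcetype]
SHOULD_LINEMERGE = true
# New event whenever a line begins with [YYYY-MM-DD HH:MM:SS,mmm]
BREAK_ONLY_BEFORE = ^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}\]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30

One caveat: if fluentd is already splitting the stack trace into separate events before it reaches Splunk (for example one HEC event per line), the merging has to be configured on the fluentd side instead; this sketch assumes Splunk is the one doing the line breaking.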
I have 3 indexes that I need to join. One index holds the changes created in our Service Management tool, the second holds the Post Implementation Reviews (PIRs), and the third links the two together: Change table, PIR table, Link table. The SourceId is the change RecId and the TargetID is the PIR RecId. What I would like to do is join the indexes so that I can show the change information and the status of the PIR. What I have so far displays only the change information: index=index_prod_sql_ism_change * | rename Owner as SamAccountName | lookup AD_Lookup SamAccountName OUTPUT DisplayName, Department | dedup ChangeNumber | table ChangeNumber, Status, TypeOfChange, Priority, DisplayName, OwnerTeam, Category, ScheduledStartDate, ScheduledEndDate | sort ChangeNumber Can someone please assist? Thanks
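A hedged sketch of one way to stitch the three indexes together. The index names for the link and PIR data (index_prod_sql_ism_link, index_prod_sql_ism_pir) and the PIR status field are assumptions based on the description, so adjust them to your environment:

index=index_prod_sql_ism_change
| rename Owner as SamAccountName
| lookup AD_Lookup SamAccountName OUTPUT DisplayName, Department
| dedup ChangeNumber
| rename RecId as SourceId
| join type=left SourceId
    [ search index=index_prod_sql_ism_link | fields SourceId, TargetId ]
| join type=left TargetId
    [ search index=index_prod_sql_ism_pir | rename RecId as TargetId, Status as PIR_Status | fields TargetId, PIR_Status ]
| table ChangeNumber, Status, PIR_Status, TypeOfChange, Priority, DisplayName, OwnerTeam, Category, ScheduledStartDate, ScheduledEndDate
| sort ChangeNumber

join is the most readable option here, but it is bounded by subsearch limits; if the link or PIR data is large, appending the three searches and rolling them up with stats values(*) by the linking IDs tends to scale better.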
Hi everyone, I am getting a big single event from an API through a Python script, containing performance data, but Splunk is not auto-extracting all the KV fields, and I need those details to get meaningful data. Also, the timestamp comes in epoch format. Below is the event format:
{'d': {'__count': '0', 'results': [{'ID': '6085', 'Name': 'device1', 'DisplayName': None, 'DisplayDescription': None, 'cpumfs': {'results': [{'ID': '6117', 'Timestamp': '1649157300', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649157600', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649157900', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649158200', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649158500', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649158800', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649159100', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649159400', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649159700', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649160000', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649160300', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649160600', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}]}, 'memorymfs': {'results': [{'ID': '6118', 'Timestamp': '1649157300', 'DeviceItemID': '6085', 'im_Free': '2.809298944E9', 'pct_im_Utilization': '83.0702196963489'}, {'ID': '6118', 'Timestamp': '1649157600', 'DeviceItemID': '6085', 'im_Free': '2.741796864E9', 'pct_im_Utilization': '83.4770099337781'}, {'ID': '6118', 'Timestamp': '1649157900', 'DeviceItemID': '6085', 'im_Free': '2.784014336E9', 'pct_im_Utilization': '83.2225932482694'}, {'ID': '6118', 'Timestamp': '1649158200', 'DeviceItemID': '6085', 'im_Free': '2.739892224E9', 'pct_im_Utilization': '83.4884879350163'}, {'ID': '6118', 'Timestamp': '1649158500', 'DeviceItemID': '6085', 'im_Free': '2.812264448E9', 'pct_im_Utilization': '83.0523485718404'}, {'ID': '6118', 'Timestamp': '1649158800', 'DeviceItemID': '6085', 'im_Free': '2.747793408E9', 'pct_im_Utilization': '83.4408727427832'}, {'ID': '6118', 'Timestamp': '1649159100', 'DeviceItemID': '6085', 'im_Free': '2.808725504E9', 'pct_im_Utilization': '83.0736754386571'}, {'ID': '6118', 'Timestamp': '1649159400', 'DeviceItemID': '6085', 'im_Free': '2.744528896E9', 'pct_im_Utilization': '83.4605457900666'}, {'ID': '6118', 'Timestamp': '1649159700', 'DeviceItemID': '6085', 'im_Free': '2.804084736E9', 'pct_im_Utilization': '83.1016422674804'}, {'ID': '6118', 'Timestamp': '1649160000', 'DeviceItemID': '6085', 'im_Free': '2.740002816E9', 'pct_im_Utilization': '83.4878214704282'}, {'ID': '6118', 'Timestamp': '1649160300', 'DeviceItemID': '6085', 'im_Free': '2.7926528E9', 'pct_im_Utilization': '83.1705349587829'}, {'ID': '6118', 'Timestamp': '1649160600', 'DeviceItemID': '6085', 'im_Free': '2.736328704E9', 'pct_im_Utilization': '83.5099629050747'}]}}
In the above event, CPU and memory utilization are reported multiple times at different epoch times for each device. I have removed the trailing data for the other devices as it was exceeding the forum limit for a post. I need to get the utilization data per device. Please help with this. Thanks
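A hedged sketch of one way to break an event like this into per-device, per-sample rows, assuming the payload becomes valid JSON once the Python-style single quotes are swapped for double quotes (field paths follow the sample above):

... your base search ...
| eval _raw=replace(_raw, "'", "\"")
| spath path=d.results{} output=device
| mvexpand device
| spath input=device path=Name output=DeviceName
| spath input=device path=cpumfs.results{} output=cpu_sample
| mvexpand cpu_sample
| spath input=cpu_sample
| eval _time=tonumber(Timestamp)
| table DeviceName, _time, pct_im_Utilization

The quote replacement is a rough hack (it also leaves Python's None in place, which is not valid JSON); emitting real JSON from the Python script, for example with json.dumps(), would make spath and automatic KV extraction behave much better. The memorymfs.results{} path can be handled the same way.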
I am parsing logs using Splunk and there are two types of logs:
1. API endpoint info and user ID.
2. Logs which contain a specific error that I am interested in (let's say the error is ERROR_FAIL).
I need all logs for a particular user hitting the endpoint and getting ERROR_FAIL. Both log types carry the same request id for one instance of an API call. So first I want to get the request ids from type 1, which gives me the request ids for the API and user I am interested in, and based on those request ids I want to see all the logs that failed with ERROR_FAIL. If I use the following query, I get all the request ids for the user and API: index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" " 123" | table xrid   Now if I add this as a sub-search, it does not work. Final query: index=app-Prod sourcetype=prod-app-logs  [search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123" | table xrid]  "ERROR_FAIL"  |  table xrid   This does not return anything, although there are logs where user 123 hits "api/rest/v1/entity" and gets "ERROR_FAIL". How can I make my query correct?
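A hedged guess at the cause, with a sketch: a subsearch that ends in | table xrid expands to (xrid="value1" OR xrid="value2" ...), which only matches events where an xrid field is already extracted at search time; if the ERROR_FAIL events carry the request id only in the raw text, that filter matches nothing. Returning the bare values instead makes the outer search treat them as raw search terms:

index=app-Prod sourcetype=prod-app-logs "ERROR_FAIL"
    [ search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123"
      | fields xrid
      | rename xrid AS search
      | format ]
| table xrid

If xrid is reliably extracted on both log types, another option is to search both patterns at once and use stats by xrid, keeping only the request ids that have both the endpoint/user log and the ERROR_FAIL log.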
I have a value that could be N/A or a number. The issue is that when it is a number, Splunk is not picking it up as one, so I have to run the "convert" command. But I need to check first whether it is N/A. Below is what I have, but it does not work - any ideas?     | eval T_CpuPerc = if(T_CpuPerc="N/A",T_CpuPerc,convert num(T_CpuPerc) )
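A hedged sketch of one way around this: convert is a standalone search command and cannot be called inside eval, but eval's tonumber() function does the same conversion, so something like this should be close:

| eval T_CpuPerc = if(T_CpuPerc=="N/A", T_CpuPerc, tonumber(T_CpuPerc))

If the N/A rows do not need to keep the literal text, | eval T_CpuPerc = tonumber(T_CpuPerc) on its own also works, since tonumber() returns null for values it cannot parse.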
Hi there, I have a scripted lookup that returns a field which contains text data. What is really intriguing is that if the returned data contains "metadata" in it, then the text is HTML-encoded (partially at least); it is not encoded when the keyword 'metadata' is absent. Is there any logical explanation for that? How can I remove this HTML encoding?
| stats count | eval curious = "jambon: de -> bayonne" | fields curious
This results in a single field containing "jambon: de -> bayonne", as expected.
| stats count | eval curious = "metadata: de -> bayonne" | fields curious
This results in a single field containing "metadata: de -&gt; bayonne", which is not expected; why is the ">" HTML-encoded?! I thought it was related to the fact that "metadata" is also a Splunk command, but after a few tries with "search", "metasearch", "mcollect", etc., none of those trigger this behaviour. Is this a weird bug? I'm on Splunk 8.2.3 - can you reproduce it, and on which versions? Thanks,
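No explanation for the encoding itself, but as a hedged workaround while it is investigated, the entities can be decoded back with replace() on the affected field, for example:

| eval curious = replace(replace(replace(curious, "&gt;", ">"), "&lt;", "<"), "&amp;", "&")

That only masks the symptom, of course; whether the scripted lookup output is being HTML-escaped somewhere along the way is a separate question.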
Hi, I'd like to properly declare my indexes on the search head layer, as suggested in the docs. All my indexes are declared through the indexer cluster manager node and are available. I could not find the right page on docs.splunk.com or in the KB that explains how I'm supposed to declare my indexes on the search layer. Each index is declared in 2 files on the indexer cluster:
- the index stanza with a volume name (distributed in the bundle from the manager node)
- the volume definition (identical on each indexer, for key encryption, in system/local)
I tried to copy the file with only the index stanzas onto my search head and ran into a wall, as the volume does not exist on that instance (which is true). Does the file need to be stripped of some properties? Or updated in some way? Please point me to the right documentation page. Of course I googled my question and unfortunately couldn't find any satisfactory answer. Thanks! Ema
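A hedged sketch of one pattern that is commonly used (worth verifying against the docs for your version): the search head never stores the buckets, it mostly needs the index names so that role restrictions, typeahead and the indexes listing resolve, so the stanzas copied to the search head can drop the volume: references and fall back to $SPLUNK_DB paths. For example, in an app on the search head:

# indexes.conf on the search head - names only, the paths are never filled with data
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

The alternative is to also define the volume stanza on the search head so the copied file works unchanged; either way the search head takes no part in storing the clustered data.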
I have the log below and I'm using the following regex to extract the fields "date", "process", "step", "user", and "log level":  rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s+\[(?<Process>\[[^\]]+\][^\]]+)\]\s+\[(?<Step>[^\]]+)\]\s+\[(?<User>[^\]]+)\]\s+[^\[]+\[(?<Log_level>[^\]]+) When the log looks like the first entry, the data is extracted without an issue, but when it looks like the later entries with empty brackets nothing is extracted. How can I solve this?
2021-09-28 10:20:27 [machine-run-76416-hit-644640-step-12470][Business Process Name][Business Process Step Name][Bot Users] MetadataStorage [ERROR] Boot failed
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DataBaseChecker [DEBUG] Checking MySQL ...
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DatabaseVersionChecker [INFO] Database is up to date.
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] DataBaseChecker [DEBUG] Checking PostgreSQL ...
2022-04-04 23:30:16 [http-nio-127.0.0.1-7080-exec-3] [] [] [] OcrHealthChecker [DEBUG] Checking OCR ...
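A hedged rework of the rex, assuming the only real change needed is tolerating the empty [] blocks: use * instead of + inside the bracketed groups, allow optional whitespace between brackets, and match the logger name as a plain token. The Thread and Logger group names are additions for illustration only:

rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[(?<Thread>[^\]]*)\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s+(?<Logger>\S+)\s+\[(?<Log_level>[^\]]+)\]"

The [^\]]* groups accept the empty [] fields, and \s* between the brackets also covers the first sample, where the brackets appear back to back with no space.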
Is there a way to make Dashboard Studio multilingual? For Classic Dashboard, I can use messages.po to make it multilingual. However, this method does not seem to be available for Dashboard Studio.
How do I use a regular expression to match a pattern in a log file? I am using the LogFile extension. For example, from the following line for a "Response" message: {"remoteHost":"epdg","epoch":1648084954231,"command":"Response","Result-Code",{"value":1001}},"statusCode":"2001","status":"FOO ","timestamp":"2022-03-24 03:22:34.231"}   Can I use a regular expression to find statusCode NOT 2001, indicating a failure? If so, what regex should I use? I can't seem to find documentation.
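A hedged sketch of a regex that matches any statusCode other than 2001, using a negative lookahead; this assumes the LogFile extension's matcher supports Java/PCRE-style lookaheads, which is worth confirming:

"statusCode":"(?!2001")\d+"

Reading it left to right: match the literal "statusCode":" prefix, fail if the very next characters are 2001", otherwise accept any run of digits followed by the closing quote, so 2001 itself never matches while 4001, 5002, 20011 and so on do.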
Hi, is there any way to troubleshoot a manual data collector configuration? For example, I configure a method invocation or SQL data collector, but I cannot select the custom fields in a search. What is usually the reason for this? If I have configured the data collector incorrectly, how can I tell? Thanks
In our environment there are 2 HFs which send logs from different sources to the Splunk indexers and to an external tool, QRadar. My question: suppose we search for Windows events for a specific timestamp on the search head and it shows 20 events - is it true that QRadar will also receive 20 events for the same timestamp? I tried to compare and there seems to be a difference in the numbers, so I want to confirm how this works. If you can, please share any docs that say whether the counts should be the same or not.
Hello, I'm using my Splunk.com username and password to log in. I've also tried my email and password with no luck, even after resetting my password.