All Posts

Yes, you are. white/blacklist has two options:

1. You explicitly list (dis)allowed event codes:
blacklist1=17,234,4762-4767

2. You specify key=regex to match (caveat - doesn't work with xml-rendered events; in that case you need another setting):
blacklist1 = EventCode=%47..%

You tried to use the second option to do the first one.
Try setting it like this:

[WinEventLog://Security]
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = 0-6000
blacklist = 1,2,3,4
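After the updated app is deployed and the forwarders have restarted, a quick sanity check (using the index and sourcetype from your stanza) is to search for the blacklisted codes and confirm nothing new arrives:

index=wineventlog sourcetype=WinEventLog:Security (EventCode=1 OR EventCode=2 OR EventCode=3 OR EventCode=4)
| stats count by host, EventCode

If this still returns recent events, the blacklist isn't being applied on those hosts.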
The /export endpoint will dispatch a search and then retrieve the results when the search is completed. If the search takes a lot of time, the request will likely time out. You can either make your search faster, or you can use two endpoints: one to dispatch the search and another to retrieve the results later.

To dispatch the search:

curl -k -H 'Authorization: Splunk <your_token_here>' https://your_searchhead_here:8089/services/search/jobs -d search="search index=* | head 10 | table host"

The above call will return a search id (sid), which you'll need in the following call to retrieve the results:

curl -k -H 'Authorization: Splunk <your_token_here>' https://your_searchhead_here:8089/services/search/jobs/<yoursidhere>/results

Ref: https://docs.splunk.com/Documentation/Splunk/latest/RESTTUT/RESTsearches
Just now accepted the solution; I didn't see a notification that it was answered, sorry.
That works. I did a search for log4j.xml and found the file. Thank you.
I am aware there is an accepted answer that helped me, but the connection was not established (message: 'need further details'), and I needed to do some extra configuration to establish the connection.

I followed the accepted answer for the JDBC driver (downloaded snowflake-jdbc-3.15.1.jar) and to configure db_connection_types.conf (don't copy and paste it from the accepted answer; see my db_connection_types.conf below).

I followed the link below to fix this issue:

could not initialize class net.snowflake.client.jdbc.internal.apache.arrow.memory.util.MemoryUtil

https://community.snowflake.com/s/article/JDBC-OutOfDirectMemoryError

I added the first two options from that solution (copied below in case the link is broken or removed) to the JVM Options in the connection settings, separated by a space. That didn't work. The connection was established with the third solution. (See db_connections.conf below to understand how that parameter is used.)

1.) Increase the maximum heap size, which in turn will increase the maximum direct memory size. You will need to refer to the application's documentation for instructions on how to configure this value because it is application-specific. If you were starting the application using the java command, then any of the following JVM arguments will set the maximum heap size to 1 GB:

-Xmx1048576k
-Xmx1024m
-Xmx1g

2.) Explicitly increase the maximum direct memory size. E.g., the following JVM argument sets the value to 1 GB:

-XX:MaxDirectMemorySize=1g

3.) If for any reason you do not have any control over the amount of memory you can allocate to your JVM (e.g., you are limited by the size of the container you're running in and it cannot be configured), then change the query result format from ARROW to JSON. You can pass this setting as a connection parameter using your JDBC driver:

JDBC_QUERY_RESULT_FORMAT=JSON

Final db_connections.conf and db_connection_types.conf:

local/db_connection_types.conf

[snowflake]
displayName = Snowflake
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = net.snowflake.client.jdbc.SnowflakeDriver
jdbcUrlFormat = jdbc:snowflake://<host>:<port>/?db=<database>
ui_default_catalog = $database$
port = 443

local/db_connections.conf

[Snowflake_DB]
connection_properties = {"JDBC_QUERY_RESULT_FORMAT":"JSON"}
connection_type = snowflake
customizedJdbcUrl = jdbc:snowflake://<host>.snowflakecomputing.com:443/?user=<user_name>&db=snowflake&warehouse=<warehouse_value>&schema=public
database = snowflake
disabled = 0
host = <host>.snowflakecomputing.com
identity = SnowflakeUser
jdbcUseSSL = false
localTimezoneConversionEnabled = false
port = 443
readonly = false
timezone = Etc/UTC
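If you want to confirm the connection end to end from Splunk (this assumes DB Connect's dbxquery command and the Snowflake_DB connection name above), a minimal test search is:

| dbxquery connection="Snowflake_DB" query="SELECT CURRENT_VERSION()"

If that returns a single row with the Snowflake version, the driver, JVM options, and the JSON result format are all working.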
My inputs.conf from the deployment server (confirmed that it is being pushed to all hosts correctly):

{WinEventLog://Security}
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = EventCode="0-6000"
blacklist = EventCode="1,2,3,4,"

Substituted other values for the blacklisted ones. Despite being explicitly disallowed, all host forwarders are still collecting and forwarding these events to the indexer. Am I misconfiguring this?
The makeresults command is exactly what I needed. Thank you!
I am also interested in this question.
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it: https://www.duanewaddle.com/proving-a-negative/

One way to produce the equivalent of that SQL is with makeresults:

<<your search for env (dev, beta, or prod) and count>>
| append
    [| makeresults format=csv data="env,count
dev,0
beta,0
prod,0"]
| stats sum(count) as count by env
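If the end goal is the alert described in the question, you can finish this off with a filter on the summed counts (this assumes the combined search above, which ends with one row per env and a count field):

| where count == 0

Any env row that survives the filter had no matching events in the search window, so the alert can simply trigger whenever the number of results is greater than zero.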
Assuming my messages.csv has a single row with the Messages field "My input search message", I don't see any double quotes added until these 3 lines:

| inputlookup messages.csv
| fields Messages
| rename Messages as search

I see:

My input search message

After adding the 4th line:

| inputlookup messages.csv
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"

I see the following:

( " "My input search message" " )

After adding the 5th line:

| inputlookup messages.csv
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g"

I see the following result with two double quotes:

(""My input search message"")
What search are you using to try to find the data?
Anyone know how to accomplish the Splunk equivalent of the following SQL?

SELECT * FROM (
  SELECT 'dev' AS env, 0 as value
  UNION SELECT 'beta' as env, 0 as value
  UNION SELECT 'prod' as env, 0 as value
)

I intend to combine this arbitrary, literal dataset with another query, but I want to ensure that there are rows for 'dev', 'beta', and 'prod' whether or not Splunk is able to find any records for these environments. The reason for this is, I'm trying to create an alert that will trigger if a particular metric is NOT published often enough in Splunk for each of these environments.
Hi @Pooja.Agrawal, It looks like the Community was not able to jump in and help. Did you find a solution or workaround in the meantime? If so, could you share your observations and learnings with the community?  If not, you can reach out to AppDynamics Support: How do I submit a Support ticket? An FAQ 
Hi @Phillip.Montgomery, Were you able to try Marios' suggestions? If it worked, please click the 'Accept as Solution' button; if not, reply back and continue the conversation.
Hi All, I am unable to see the logs for this source even though the internal logs show the file is being tailed and read. Can you please guide me on what could be wrong here?

I can see this in the internal logs:

INFO Metrics - group=per_source_thruput, series="log_source_path", kbps=0.056, eps=0.193, kb=1.730, ev=6, avg_age=0.000, max_age=0

But I don't see the logs in Splunk. The recent logs are there in the file on the host, and other sources are coming into Splunk fine.
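One thing worth ruling out: since per_source_thruput shows the file is being read, the events may be landing with an unexpected timestamp or in an unexpected index, so they fall outside the window being searched. A broad check over All Time can show where they went ("log_source_path" below is just the placeholder from the metrics line; substitute the real source path):

| tstats count where index=* source="log_source_path" by index, sourcetype, _time span=1d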
My guess is that the incorrect search results could be because of spaces in my Message field in the CSV. My input lookup CSV's Message field has the string "My input search message". I need to match all lines that start with that entire phrase, between "My input search message" and a given endswith. Currently I guess it is looking for the events "My" "input" "search" "message" individually. Can you please help with how to match the entire message in startswith?
I see a column with name search and value (""field1"") Do we need to have field1 inside parentheses and two double quotes?

Field label "search" in a subsearch is a pseudo keyword for "use as-is literal" in a search command. No, they should NOT have two quotation marks on each side. Maybe your lookup values insert one additional set of double quotes? If so, we can get rid of one set.

Here is my emulation:

| makeresults format=csv data="id,Messages
,a
,b
,c
,d"
``` the above emulates | inputlookup messages.csv ```
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g"

Output only contains one set of double quotes:

search
("a","b","c","d")
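For what it's worth, if the custom quoting isn't strictly needed, the usual way to turn lookup rows into a search filter is to let the default subsearch formatting build field=value pairs (index=main and the Message field name here are placeholders, not from the thread):

index=main [| inputlookup messages.csv | fields Messages | rename Messages as Message]

The subsearch expands to ( ( Message="My input search message" ) ), so the whole phrase is matched as one quoted value instead of as the individual words "My" "input" "search" "message".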