All Posts


I am aware there is an accepted answer that helped me, but the connection was not established with it alone; I needed to do some extra configuration to establish the connection.

I followed the accepted answer for the JDBC driver (downloaded snowflake-jdbc-3.15.1.jar) and for configuring db_connection_types.conf (don't copy and paste it from the accepted answer; see my db_connection_types.conf below).

I then ran into a problem with this error message:

could not initialize class net.snowflake.client.jdbc.internal.apache.arrow.memory.util.MemoryUtil

I followed this link to fix the issue: https://community.snowflake.com/s/article/JDBC-OutOfDirectMemoryError

I added the first two options from that solution (reproduced below in case the link is broken or removed) to the JVM Options in the connection settings, separated by a space. It didn't work. However, the connection got established with the third solution. (See db_connections.conf below to understand how that parameter is used.)

1.) Increase the maximum heap size, which in turn will increase the maximum direct memory size. You will need to refer to the application's documentation for instructions on how to configure this value because it is application-specific. If you start the application using the java command, then any of the following JVM arguments will set the maximum heap size to 1 GB: -Xmx1048576k -Xmx1024m -Xmx1g

2.) Explicitly increase the maximum direct memory size. E.g., the following JVM argument sets the value to 1 GB: -XX:MaxDirectMemorySize=1g

3.) If for any reason you do not have any control over the amount of memory you can allocate to your JVM (e.g., you are limited by the size of the container you're running in and it cannot be configured), then change the query result set from ARROW to JSON. You can pass this setting as a connection parameter using your JDBC driver: JDBC_QUERY_RESULT_FORMAT=JSON

Final db_connection_types.conf and db_connections.conf:

local/db_connection_types.conf

[snowflake]
displayName = Snowflake
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = net.snowflake.client.jdbc.SnowflakeDriver
jdbcUrlFormat = jdbc:snowflake://<host>:<port>/?db=<database>
ui_default_catalog = $database$
port = 443

local/db_connections.conf

[Snowflake_DB]
connection_properties = {"JDBC_QUERY_RESULT_FORMAT":"JSON"}
connection_type = snowflake
customizedJdbcUrl = jdbc:snowflake://<host>.snowflakecomputing.com:443/?user=<user_name>&db=snowflake&warehouse=<warehouse_value>&schema=public
database = snowflake
disabled = 0
host = <host>.snowflakecomputing.com
identity = SnowflakeUser
jdbcUseSSL = false
localTimezoneConversionEnabled = false
port = 443
readonly = false
timezone = Etc/UTC
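Once the connection saves, a quick dbxquery search can confirm it works end to end. A minimal sketch: the connection name matches the stanza above, and the SELECT statement is only an illustrative placeholder.

| dbxquery connection="Snowflake_DB" query="SELECT CURRENT_VERSION()"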
My inputs.conf from the deployment server (confirmed that it is being pushed to all hosts correctly):

{WinEventLog://Security}
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
whitelist = EventCode="0-6000"
blacklist = EventCode="1,2,3,4,"

I substituted other values for the blacklisted ones. Despite being explicitly disallowed, all host forwarders are still collecting and forwarding these events to the indexer. Am I misconfiguring this?
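For reference, the inputs.conf documentation describes the stanza with square brackets and the filters either as a plain list of event codes or as key=regex pairs. A sketch of that shape, with placeholder event codes only (not a recommendation for which codes to filter):

[WinEventLog://Security]
disabled = 0
index = wineventlog
# simple format: a comma-separated list of event codes or ranges
blacklist = 1,2,3,4
# advanced format: key = regular expression
whitelist = EventCode=%^(4624|4625|4688)$%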
The makeresults command is exactly what I needed. Thank you!
I am also interested in this question.
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it: https://www.duanewaddle.com/proving-a-negative/

One way to produce the equivalent of that SQL is with makeresults:

<<your search for env (dev, beta, or prod) and count>>
| append
    [| makeresults format=csv data="env,count
dev,0
beta,0
prod,0"]
| stats sum(count) as count by env
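To turn that into the alert described in the question, the zero rows from makeresults can be appended to the real search, and anything that stays at zero flagged. A sketch, where the index, sourcetype, and field names are assumptions to adapt:

index=your_index sourcetype=your_sourcetype env IN (dev, beta, prod)
| stats count by env
| append
    [| makeresults format=csv data="env,count
dev,0
beta,0
prod,0"]
| stats sum(count) as count by env
| where count=0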
Assuming my messages.csv has a single row with the Messages field "My input search message", I don't see any double quotes added with these 3 lines:

| inputlookup messages.csv
| fields Messages
| rename Messages as search

I see: My input search message

After adding a 4th line:

| inputlookup messages.csv
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"

I see the following: ( " "My input search message" " )

After adding a 5th line:

| inputlookup messages.csv
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g"

I see the following result, with two double quotes: (""My input search message"")
What search are you using to try to find the data?
Anyone know how to accomplish the Splunk equivalent of the following SQL?

SELECT * FROM (
    SELECT 'dev' AS env, 0 AS value
    UNION SELECT 'beta' AS env, 0 AS value
    UNION SELECT 'prod' AS env, 0 AS value
)

I intend to combine this arbitrary, literal dataset with another query, but I want to ensure that there are rows for 'dev', 'beta', and 'prod' whether or not Splunk is able to find any records for these environments. The reason is that I'm trying to create an alert that will trigger if a particular metric is NOT published often enough in Splunk for each of these environments.
Hi @Pooja.Agrawal, It looks like the Community was not able to jump in and help. Did you find a solution or workaround in the meantime? If so, could you share your observations and learnings with the community?  If not, you can reach out to AppDynamics Support: How do I submit a Support ticket? An FAQ 
Hi @Phillip.Montgomery, Were you able to try Marios' suggestions? If they worked, please click the 'Accept as Solution' button; if not, reply back and continue the conversation.
Hi All, I am unable to see the logs for a source even though the internal logs show the file is being tailed and read. Can you please guide me as to what could be wrong here?

I can see this in the internal logs:

INFO Metrics - group=per_source_thruput, series="log_source_path", kbps=0.056, eps=0.193, kb=1.730, ev=6, avg_age=0.000, max_age=0

But I don't see the logs in Splunk. The recent logs are there in the file on the host, and other sources are coming into Splunk fine.
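Two checks that might narrow this down: a tstats search run over All Time, to see whether the events landed in an unexpected index or with a bad timestamp, and a look at splunkd's own logs for timestamp-parsing or line-merging complaints. The source path below is a placeholder for the actual monitored file:

| tstats count where index=* source="/path/to/log_source_path" by index sourcetype source

index=_internal sourcetype=splunkd (component=DateParserVerbose OR component=AggregatorMiningProcessor) (log_level=WARN OR log_level=ERROR)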
My guess is that the incorrect search results could be because of the spaces in my Message field in the CSV. My input lookup CSV's Message field has the string "My input search message". I need to match all lines that start with that entire string, between "My input search message" and a given endswith. Currently, I guess, it is looking for the events "My", "input", "search", "message" individually. Can you please help with how to match the entire message in startswith?
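For what it's worth, a quoted string in a Splunk search is treated as a single phrase rather than as separate terms; a minimal sketch, with the index name as a placeholder:

index=your_index "My input search message"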
I see a column with name search and value (""field1"") Do we need to have field1 inside parentheses and two double quotes?

The field label "search" in a subsearch is a pseudo keyword meaning "use as-is as a literal" in a search command. No, the values should NOT have two quotation marks on each side. Maybe your lookup values insert one additional set of double quotes? If so, we can get rid of one set. Here is my emulation:

| makeresults format=csv data="id,Messages
,a
,b
,c
,d"
``` the above emulates | inputlookup messages.csv ```
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g"

The output only contains one set of double quotes:

search
("a","b","c","d")
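For context on how that "search" field is consumed: the formatted subsearch result is dropped into the outer search as a literal, so end to end it would look something like this, with the outer index as a placeholder:

index=your_index
    [| inputlookup messages.csv
     | fields Messages
     | rename Messages as search
     | format "(" "\"" "" "\"" "," ")"
     | rex field=search mode=sed "s/ *\" */\"/g"]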
Hello, I have this type of data, and I'd like to extract the following fields with a rex command:

Two words: Don't. The data you show is clearly a fragment of a JSON object. Do not treat structured data such as JSON as text, because the developer can change the format at any time without changing the syntax and render your rex useless. Splunk has robust, QA-tested commands like spath. Follow @ITWhisperer's advice to share valid, raw JSON data. (Anonymize as needed.) If your raw data is a mix of free text and JSON, show examples of how they are mixed so we can extract the valid JSON, then handle the JSON with spath or fromjson (9.0+).

Specific questions:

1.) I have a strong suspicion that your data illustration is not a faithful representation of the raw data, because it contains lots of parentheses "(" and ")" instead of curly brackets "{" and "}" as in compliant JSON. It is almost impossible for a developer to mix parentheses and curly brackets randomly by mistake. Can you verify and clarify?

2.) If your raw event is pure JSON, your highlighted snippets should have already been extracted by Splunk as the multivalued fields data{}.from, data{}.to, and data{}.intensity.forecast. Do you not get those?

3.) Alternatively, is the illustrated data from a field that is already extracted (but misrepresented with mixed parentheses and curly brackets)?

Lastly, a common logging practice is to append JSON data at the end of an event, following other informational strings that do not contain an opening curly bracket. If this is the case, you can easily extract that JSON part with the following and handle it robustly with spath:

| rex "^[^{]*(?<json_data>.+)"
| spath input=json_data path=data{}
| mvexpand data{}
| spath input=data{}

After this, your highlighted values would be in the fields from, to, and intensity.forecast, respectively.
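To make that concrete, here is a self-contained emulation of the same pipeline using a made-up event that mixes a text prefix with JSON; the from, to, and intensity.forecast fields mirror the ones highlighted in the question:

| makeresults
| eval _raw="2024-05-01 12:00:00 INFO carbon - {\"data\":[{\"from\":\"2024-05-01T12:00Z\",\"to\":\"2024-05-01T12:30Z\",\"intensity\":{\"forecast\":123}},{\"from\":\"2024-05-01T12:30Z\",\"to\":\"2024-05-01T13:00Z\",\"intensity\":{\"forecast\":131}}]}"
| rex "^[^{]*(?<json_data>.+)"
| spath input=json_data path=data{}
| mvexpand data{}
| spath input=data{}
| table from to intensity.forecast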
@bowesmana  Thank you for confirming.  That was how I understood this as well.  I was curious if there were options I wasn't aware of.     Thank you once again.
You are correct.
@gcusello, thank you so much, it’s working as expected. Thank you once again.
That's a great idea! I'll give it a shot.
Aren't you overcomplicating it a bit? Just render the date to a field:

| eval day=strftime(_time,"%F")

and you're ready to go:

| stats min(_time) as earliest max(_time) as latest by day
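If the earliest and latest values should then read as clock times rather than epoch seconds, they can be rendered the same way; a small sketch, with the index name as a placeholder:

index=your_index
| eval day=strftime(_time,"%F")
| stats min(_time) as earliest max(_time) as latest by day
| eval earliest=strftime(earliest,"%T"), latest=strftime(latest,"%T")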