All Posts

Hi All, I am using the Splunk Add-on for GCP to pull logs from a log sink via Pub/Sub. I configured a Pub/Sub input inside the add-on and it is successfully pulling the logs from Pub/Sub. But I want to confirm: after receiving messages from Pub/Sub, does the GCP add-on send back an ACK (acknowledgement) to Pub/Sub so that the same message is not sent twice or duplicated? There is nothing mentioned about ACK messages in the GCP add-on documentation, so I am asking here. Please help me out.
Hello, I tried to set up a DB input with a query like this:

SELECT ..., txn_stamp as TXTIME, .... FROM mybd WHERE txn_stamp > ? ORDER BY TXTIME ASC

When I hit Execute query, the result produces the error ORA-01861: literal does not match format string. My txn_stamp is a timestamp column with the format YYYY-mm-dd HH:MM:SS (e.g. 2023-08-31 00:00:25). The curious thing is that it sometimes works - Execute query shows data - but it stops at some point, and I suspect it's because of the above error. My thinking is that I want to convert either my DB timestamp format or the rising column timestamp format so the two match and there is no mismatch, but I don't know how.
Yes, it is possible to add parameters to a Splunk URL to pre-populate the search query and make it more user-friendly. This can be helpful for sharing saved searches or dashboards with others so that they don't need to manually enter the SPL search.

To pre-populate a search query in a Splunk URL, you can use the search parameter. Here's the basic structure of a Splunk URL with a pre-populated search query:

https://splunk_server:port/en-US/app/<APP_NAME>/search?q=<URL_ENCODED_SEARCH_QUERY>

For example, if you want to pre-populate a search for "error messages", you can encode the query and create a URL like this:

https://splunk_server:port/en-US/app/search/search?q=error%20messages

When users click this URL, they will be taken to the Splunk search page with the "error messages" query already in the search bar. They can then run the search or refine it further as needed.

To create the <URL_ENCODED_SEARCH_QUERY> part of the Splunk URL, you need to URL-encode the actual SPL query you want to pre-populate. URL encoding is necessary to make sure that spaces and special characters in the query are correctly formatted for a URL.

Here's an example. Let's say your SPL query is:

index=myindex sourcetype=mylog "error messages" OR "warning messages" source="/var/log/app.log"

To URL-encode this query, you replace spaces with %20 and percent-encode the other reserved characters (=, ", /, and so on):

index%3Dmyindex%20sourcetype%3Dmylog%20%22error%20messages%22%20OR%20%22warning%20messages%22%20source%3D%22%2Fvar%2Flog%2Fapp.log%22

So your complete Splunk URL with the pre-populated, URL-encoded search query would look like:

https://splunk_server:port/en-US/app/search/search?q=index%3Dmyindex%20sourcetype%3Dmylog%20%22error%20messages%22%20OR%20%22warning%20messages%22%20source%3D%22%2Fvar%2Flog%2Fapp.log%22

You can use online URL-encoding tools (I am using CyberChef) to automatically encode your SPL query if it contains complex characters. Just paste your query into one of these tools and it will generate the URL-encoded version for you.
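If you'd rather script the encoding than paste into an online tool, here is a minimal sketch using only the Python standard library (the host, port, and app in the base URL are placeholders, not anything specific to your environment):

# Minimal sketch: URL-encode an SPL query and build a pre-populated Splunk search URL.
# The base URL is a placeholder - substitute your own server, port, and app.
from urllib.parse import quote

base_url = "https://splunk_server:port/en-US/app/search/search"
spl_query = 'index=myindex sourcetype=mylog "error messages" OR "warning messages" source="/var/log/app.log"'

# quote() percent-encodes spaces, quotes, "=" and other reserved characters;
# safe="" makes sure "/" is encoded too (by default it is left as-is).
encoded_query = quote(spl_query, safe="")

print(f"{base_url}?q={encoded_query}")

This should print the same URL as the example above, so you can drop it into any script that needs to generate shareable search links.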
@michael_vi Did you happen to get this list at all? I may be needing it in the near future, so I'm just trying to get ahead of it.
This is a known issue, tracked as SPL-235420. It is fixed in 9.1.0: https://docs.splunk.com/Documentation/Splunk/9.1.0/ReleaseNotes/Fixedissues#Charting.2C_reporting.2C_and_visualization_issues As a workaround, specify the app name directly in the dashboard definition.
Hi, I have the same problem here. I tried to set it to 60 days, but it only shows 30 days of data using your SPL.
I don't have ldapsearch set up, so I can't test - but give this a try:

| makeresults
| eval relativedate=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z")
| map search="| ldapsearch search=\"(&(objectClass=user)(whenChanged>=$relativedate$)(!(objectClass=computer)))\" "
| table cn whenChanged whenCreated
I get a min interval if I choose a time less than 3 hours ago, but the event I want to search for was 5 hours ago!
This does not work and I get no results. Any ideas what I'm doing wrong? What would the full search line be please?
Hello, I changed the title. The CIDR match is used to see if an IP is within a subnet. I was trying to match the same IPv6 from the index against my CSV table, but it appears in a different format. In the example:

Index has the collapsed format of the IPv6:  2001:db8:3333:4444:5555:6666::2101
CSV has the expanded format of the IPv6:    2001:db8:3333:4444:5555:6666:0:2101

The following lookup can NOT find an IPv6 with an inconsistent pattern; it only finds the exact match:

| index=vulnerability_index | lookup company.csv ip_address as ip OUTPUTNEW ip_address, company, location

I think this is what I am looking for, I just don't know how to implement it: https://splunkbase.splunk.com/app/4912

Thank you for your help
Good afternoon everyone, I'm trying to change the sender when I configure a new SMTP asset - more precisely, I want to change the sender domain when I configure the asset - however, I have not been able to get it to work. The only domains I can use are splunkcloud.com and splunk.com. Does anyone know how I can use another domain without using a username and password to authenticate?
Do you mean /app/search/search?q=search%20index%3D_internal%0A%7C%20stats%20count%20by%20component (formatted) as opposed to /app/search/search?q=search%20index%3D_internal%20%7C%20stats%20count%20by%20component (one line)? You just need to make sure the original URI preserves the formatting (the newline is encoded as %0A).
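For what it's worth, here is a small Python check (assuming the same _internal query) showing that the only difference between the two URIs is whether the original SPL has a newline or a space before the pipe:

# Small demo: a newline in the SPL is encoded as %0A, a space as %20.
from urllib.parse import quote

formatted = "search index=_internal\n| stats count by component"  # multi-line SPL
one_line = "search index=_internal | stats count by component"    # single-line SPL

print(quote(formatted, safe=""))  # ...%0A%7C%20stats%20count%20by%20component
print(quote(one_line, safe=""))   # ...%20%7C%20stats%20count%20by%20component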
Forget DBXquery.  Splunk's lookup can work with IPv6 CIDR.  You just need to build your lookup with CIDR.  See IPv6 CIDR match in Splunk Web (also Define a CSV lookup in Splunk Web).
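If the existing lookup file first needs to be converted from plain addresses to CIDR entries, a rough Python sketch along these lines could do it (the file names and the ip_address column are assumptions taken from the question, not anything the lookup requires):

# Hypothetical helper: rewrite each exact IPv6 address in a lookup CSV as a /128
# CIDR entry, so the lookup field can use a CIDR match type in Splunk Web.
# "company.csv" and the "ip_address" column are assumed names from the question.
import csv
import ipaddress

with open("company.csv", newline="") as src, open("company_cidr.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # ip_network() accepts either the collapsed or the expanded text form;
        # a single host address becomes a /128 network.
        row["ip_address"] = str(ipaddress.ip_network(row["ip_address"] + "/128"))
        writer.writerow(row)

Then upload company_cidr.csv as the lookup file and set the match type for ip_address to CIDR in the lookup definition, as described in the docs linked above.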
As @gcusello points out, you can do this with a subsearch/inputlookup in the outer search, or you can do it with a lookup + where clause - try both and use the one that gives you the best performance.

index="data" sourcetype="entities"
``` This will lookup the EXTERNAL_EMAIL field in the data against the E_MAIL field in the CSV ```
| lookup 20230904_NeverLoggedIn.csv E_MAIL as EXTERNAL_EMAIL OUTPUT E_MAIL as Found
``` If the EXTERNAL_EMAIL is Found, this will give you the result. Change to isnull(Found) to find users that do NOT exist in the CSV ```
| where isnotnull(Found)
| table EMAIL EXTERNAL_EMAIL CATEGORY
Try this, combining the two lookups by using append for the second lookup:

index=toto
    [ | inputlookup test.csv
      | inputlookup test2.csv append=t
      | eval user=Domain."\\".Sam
      | table user]
| table _time user

I believe there is a missing '.' in your eval statement setting up user - and is 'Sam' a field name?
Hello Everyone,

First off, thanks in advance to everyone who takes the time to contribute to this post!

I've got custom HTML in Simple XML and was able to grab data from a text area and parse it into a JavaScript variable, captured using the code below. I'm trying to use the captured variable in the search query in the SearchManager. So far I've only been able to set static values such as eval test = "Working", but have had no luck passing in a JavaScript variable.

require([
    "underscore",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!",
], function(_, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: '| makeresults | eval test = captured | collect index = "test_index"'
    });
    $("#btn-submit").on("click", function () {
        // Capture value of the Text Area
        var captured = $("textarea#outcome").val();
        mysearch.startSearch();
    });
});
My example was XML for use in a classic dashboard - so take the entire XML below, create a new dashboard, and paste this into the source.

<dashboard version="1.1">
  <row>
    <panel>
      <table>
        <title>Turning the Time column red if outside hours 18:00 to 06:00</title>
        <search>
          <query>
            index="winlogs" host=* source="WinEventLog:Security" EventCode=4624 Logon_Type=2 OR Logon_Type=7 NOT dest_nt_domain="Window Manager" NOT dest_nt_domain="Font Driver Host"
            | sort _time
            | convert ctime(_time) as timestamp
            | table timestamp,EventCode,Logon_Type,Account_Name,RecordNumber,status
          </query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="timestamp">
          <colorPalette type="expression">if(tonumber(substr(value,12,2))&gt;=18 OR tonumber(substr(value,12,2))&lt;6, "#FF0000", "#FFFFFF")</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>

This is what an XML dashboard looks like. You can see your search in the <search> section, and the <format> section is what defines your colours and tests the time range. The documentation for the format options is here: https://docs.splunk.com/Documentation/Splunk/9.1.0/Viz/TableFormatsXML
Hello, how do I perform a lookup when the IPv6 format in the CSV file is inconsistent with the format in the index? For example:

Index has the collapsed format of the IPv6:  2001:db8:3333:4444:5555:6666::2101
CSV has the expanded format of the IPv6:    2001:db8:3333:4444:5555:6666:0:2101

The following lookup can NOT find an IPv6 with an inconsistent pattern; it only finds the exact match:

| index=vulnerability_index | lookup company.csv ip_address as ip OUTPUTNEW ip_address, company, location

In IPv6, "::" (double colon) represents consecutive zero groups (:0:, :0:0:, or :0:0:0:), and ":0:" represents 0000.

I think this is what I am looking for, but I am not sure how to implement it: https://splunkbase.splunk.com/app/4912

Thank you for your help
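For reference, a quick check with Python's standard ipaddress module (using the two addresses above) confirms they are the same address, just written differently, which is why the exact-match lookup misses it - normalizing both sides to the same text form (e.g. the exploded form) before the lookup would be one way around this:

# Quick check: both text forms parse to the same IPv6 address.
import ipaddress

from_index = ipaddress.ip_address("2001:db8:3333:4444:5555:6666::2101")   # collapsed format (index)
from_csv = ipaddress.ip_address("2001:db8:3333:4444:5555:6666:0:2101")    # expanded format (CSV)

print(from_index == from_csv)   # True - same address, different spelling
print(from_index.exploded)      # 2001:0db8:3333:4444:5555:6666:0000:2101
print(from_csv.exploded)        # 2001:0db8:3333:4444:5555:6666:0000:2101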
Hello, we are working on setting up some health rules in AppDynamics to monitor slow-running queries in the database. After going through the documentation on the website, we configured a health rule as seen below. Our problem with this is that there is a 'Group Replication module' (screenshot below) always running on the DB side that is needed but is causing constant violations. Is there a way of adding an exception for specific queries so that similar items in the database do not trigger violations? Is there another way you can suggest we move forward with this that will give us a more accurate result?
Hello All, I am using Maps+ with some success. I have one question: is there a way to zoom back to a set zoom level (like 3 or 4) after the default zoom-in on a cluster? I am using Maps+ to show up or down network devices. The cluster shows, say, 2 devices at a given lat/long and zooms in quite a lot. I am using a map of the US, more or less centered in the window, but after the zoom-in I have to back out using the "-" icon. It would be nice if Maps+ had the zoom-back feature of the legacy map using geostats, etc.

Thanks eholz1