All Topics

Hi, I have a situation where I need to split my stats table. I have tried to use transpose and xyseries but am not getting it. Has someone had the same situation? Please guide. Thanks! What I have: What I need:
Hi all, I have run into a wall on a query I am attempting. I am receiving an error in my log, and one of the items is there for good reason, but my system still picks it up. The error is one long string (mostly), and I have used rex to extract both items as values and place them in a table, but they are gathered by message and I cannot get the system to acknowledge a NOT statement. Here is what I have: Basic search path | rex field=_raw max_match=0 "(?<Error>TCM-\d{5}).*?rowKey=(?<Value>\w*?),.*?]\)" | where not Value = CASHAMT | table Error Value I have also tried != as well, with little effect. I still get it in the table as one of three items for the days. If I do "CASHAMT" then it shows me no values (presumably because all the errors are in one message?). I just do not want the error and value to show up in the table when Value = CASHAMT for that row. Any thoughts on this would be very useful; I can provide any further context as well.
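A sketch of one way to make the exclusion work (field names taken from the post): with max_match=0, Error and Value are multivalue on a single event, so a scalar where can't drop just one pair; zip and expand the pairs first, and quote the literal so it isn't read as a field name:

```
Basic search path
| rex field=_raw max_match=0 "(?<Error>TCM-\d{5}).*?rowKey=(?<Value>\w*?),.*?]\)"
| eval pair=mvzip(Error, Value)
| mvexpand pair
| eval Error=mvindex(split(pair,","),0), Value=mvindex(split(pair,","),1)
| where Value!="CASHAMT"
| table Error Value
```

mvzip joins each Error/Value pair with a comma (safe here since neither pattern can contain one), mvexpand gives each pair its own row, and the quoted comparison then drops only the CASHAMT rows.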
Hi Splunkers, I have a CSV download with URL threat intel which is a flat file with URLs listed. I will import these into the Splunk Enterprise Security app; however, I need to add more columns to the file to allow the import. Currently the file is flat, showing just URLs as below: https://testurl.com https://testurl1.com https://testurl2.com https://testurl3.com I need to add the columns as below and have the URLs land in column 4 (url): description,http_referrer,http_user_agent,url,weight This is more of a *nix question, but if anyone can assist with how I might edit this CSV to add those columns, that would be great. I download the file via a cron job, so I'll create another job to run the import script after the download. Any help appreciated. Thanks
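A minimal sketch of the *nix side (file names urls.txt and urls.csv are examples): prepend the header line, then pad each URL row with three empty leading columns and one empty trailing column so the URL lands in column 4:

```shell
# Example input: a flat list of URLs, one per line.
printf 'https://testurl.com\nhttps://testurl1.com\n' > urls.txt

# Emit the header, then rewrite each row as ,,,<url>, (empty description,
# http_referrer, http_user_agent, and weight columns).
{
  echo 'description,http_referrer,http_user_agent,url,weight'
  awk '{ print ",,," $0 "," }' urls.txt
} > urls.csv

cat urls.csv
```

You could drop this into the same cron job right after the download step; if the feed ever contains commas inside URLs, the values would need quoting as well.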
Let me start by saying I know we should be using the coalesce command. I didn't write this query; it has been running fine for a year and it broke after we upgraded to 8.0.5.1. So just making sure I'm not crazy. Sample CSVs:

Host_File_1.csv: abc.com,1.1.1.1
Host_File_2.csv: xyz.com,2.2.2.2

Splunk 7.2.5.1: | inputlookup Host_File_1.csv | inputlookup Host_File_2.csv append=true | rename host_file_1_name as hostname | rename host_file_2_name as hostname | table hostname, ip

Output:
Hostname IP
abc.com  1.1.1.1
xyz.com  2.2.2.2

Splunk 8.0.5.1: | inputlookup Host_File_1.csv | inputlookup Host_File_2.csv append=true | rename host_file_1_name as hostname | rename host_file_2_name as hostname | table hostname, ip

Output:
Hostname IP
xyz.com  2.2.2.2

abc.com in this case gets overwritten by xyz.com, it seems. Anyone know why this is happening?
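For what it's worth, a coalesce-based rewrite (field names taken from the post) sidesteps the collision entirely: renaming two different source fields to the same target name leaves the result order-dependent, whereas coalesce picks whichever field is present on each row:

```
| inputlookup Host_File_1.csv
| inputlookup Host_File_2.csv append=true
| eval hostname=coalesce(host_file_1_name, host_file_2_name)
| table hostname, ip
```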
Hi Splunk, It seems that sending log messages to Splunk HEC endpoints containing "\n", "\r", or "\t" causes the HEC endpoint to respond: "{"text":"Invalid data format","code":6,"invalid-event-number":0}". Does anyone know where one can find the list of characters that are not permitted? Thank you, Eduardo
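I can't point to an official character list, but JSON itself forbids unescaped control characters (U+0000 through U+001F) inside strings, which matches the symptom: if literal newlines or tabs are spliced raw into the payload, the JSON is invalid before HEC even looks at the event. Letting a JSON serializer build the payload escapes them automatically; a sketch in Python:

```python
import json

# An event containing literal control characters that would break
# hand-built JSON if embedded verbatim.
raw_event = "line one\nline two\tcolumn\r"

# json.dumps escapes \n, \r, \t (and all other control characters),
# producing a payload HEC will accept.
payload = json.dumps({"event": raw_event, "sourcetype": "manual"})
print(payload)
```

The same applies to any language's standard JSON library; the fix is to serialize the event rather than concatenate it into a JSON template.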
Good day Splunkers, Can anyone assist in letting me know if there is a way that I can use Splunk to monitor my Bamboo agents, whether they're down, building, or online?
Hello, I am trying to use a lookup table to search against the URL field inside the proxy logs. The use case is to find out if any users have been accessing any domains/URLs that are listed in the lookup file. I am trying to strip away all the extra characters and keep only the url/domain that it is matching on. This is what I have so far: index=proxy sourcetype=proxy:syslog:proxy_web_policy [| inputlookup lookupfile_last_status | return 1000 $lookup_domain_name] | rex field=proxy_url_field "(?<lookup_domain_name>(\w+\.)+\w+)" | table name url status Example: www.google.com/complete/search?client=chrome-omni&gs_ri=chrome-ext-ansg&xssi=t&q=gist.github.com/mcaj-admin/18558a1ec6a782d2452f971e806230c6&oit=3&url=https://gist.github.com/mcaj-admin/18558a1ec6a782d2452f971e806230c6&pgcl=4&gs_rn=42&psi=VqC6Zso_KQ3pgOGL&sugkey=AIzaSyBOti4mM-6x9WDnZIjIeyEU21OpBXqWBgw It is matching on "gist.github.com/mcaj-admin/18558a1ec6a782d2452f971e806230c6". How can I strip away everything but what it is matching on? I am trying to figure out how to use regex + the lookup. Thanks.
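A sketch of one approach (field and lookup names taken from the post; this assumes lookupfile_last_status is configured as a lookup definition, not just a CSV file): extract every domain-like token from the URL, then keep only the tokens that exist in the lookup, so the table shows exactly the value that matched:

```
index=proxy sourcetype=proxy:syslog:proxy_web_policy
| rex field=proxy_url_field max_match=0 "(?<candidate>(\w+\.)+\w+)"
| mvexpand candidate
| lookup lookupfile_last_status lookup_domain_name AS candidate OUTPUT lookup_domain_name AS matched
| where isnotnull(matched)
| table matched proxy_url_field status
```

The lookup command returns the output field only on rows where candidate matches an entry, so filtering on isnotnull(matched) leaves just the intel hits, with matched holding the bare domain rather than the full URL.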
Hi, I'm building a simple dev CentOS VM on my PC to try out clustering configuration and other stuff. I've used tar -C to install the Splunk tgz into different directories and set the web, mgmt, KV store, and appserver ports to different values, but when doing ./splunk start/stop/restart it will only apply to the original Splunk install. I've found an old link, but it reflects init.d changes and not the new systemd. I'd appreciate any help configuring this to work.
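For day-to-day control, each copy is driven by its own binary, so running the full path (e.g. /opt/splunk2/bin/splunk start) rather than a bare ./splunk from the wrong directory keeps the instances separate. For boot-start under systemd, each instance needs its own unit with a distinct name; a sketch of what such a unit looks like (paths, unit name, and user are examples — this mirrors the unit that splunk enable boot-start -systemd-managed 1 generates when run from each instance's own bin directory):

```ini
# /etc/systemd/system/Splunkd2.service -- second instance in /opt/splunk2
[Unit]
Description=Second Splunk dev instance
After=network.target

[Service]
Type=simple
User=splunk
ExecStart=/opt/splunk2/bin/splunk _internal_launch_under_systemd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

After creating a unit per instance, systemctl daemon-reload followed by systemctl start Splunkd2 (etc.) controls each one independently.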
I am looking for a way to create a query that will search and store license usage data per index. The idea is that I want to be able to view this visually in a dashboard (timechart). Currently, I use the following query: earliest=-30d@d latest=@d (index=_internal source=*license_usage.log* type="Usage") | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h) | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s) | eval idx=if(((len(idx) == 0) OR isnull(idx)),"(UNKNOWN)",idx) | timechart span=1d eval(round((sum(b)/1024/1024/1024),3)) AS Volume by idx useother=f limit=0 | addtotals row=t col=f fieldname="Daily (GB)" From my understanding, the internal index retention is 30 days. I do not want to change this, but I want to be able to search back past 30 days for license data in a format similar to the above query. Any advice is appreciated, thanks!
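One common pattern (a sketch, not from the post) is summary indexing: schedule a daily search that writes per-index totals into a long-retention summary index with collect. The index name license_summary is my placeholder and must be created first:

```
index=_internal source=*license_usage.log* type="Usage" earliest=-1d@d latest=@d
| eval idx=if(len(idx)==0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) AS bytes by _time, idx
| collect index=license_summary
```

The dashboard then charts from the summary over any range, e.g. index=license_summary | timechart span=1d eval(round(sum(bytes)/1024/1024/1024,3)) AS Volume by idx, unconstrained by the 30-day _internal retention.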
I have installed the Cisco Cloudlock for Splunk app: https://splunkbase.splunk.com/app/3043/ And configured the API token, URL, etc., as documented here: https://github.com/CiscoDevNet/cloud-security/tree/master/Cloudlock/Splunk/Cisco%20Cloudlock%20Splunk%20App However, I'm not seeing any data. I don't see any outbound connections or calls to the API service via netstat, and I don't see a corresponding log for the input in /opt/splunk/var/log/splunk. How else can I monitor/check the status of the data input? Thanks! @yaronc
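One place to look (a sketch; the "cloudlock" keyword assumes the input's script path or name contains that string) is splunkd's own log of scripted/modular input execution, which captures stderr and launch failures from the input:

```
index=_internal source=*splunkd.log* (component=ExecProcessor OR component=ModularInputs) cloudlock
| table _time log_level component event_message
```

If that search returns nothing at all, the input likely never launched, which points at the input being disabled or misconfigured rather than an API-side problem.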
In our environment, one FMC is already integrated. Scenario: one add-on with a GUI (IP & password) and, at the backend, a "client.pkcs12" file in $SPLUNK_HOME/etc/app/bin/encore. Now, for the second FMC I can add the add-on on another forwarder, but at the GUI level the app gets overwritten. How can I add another FMC in the same Splunk environment? To validate the certificate, the password we keep at the GUI level needs to match. Please help here.
This is copy/pasted directly from the app description. "After the Splunk platform indexes the events, you can consume the data using the prebuilt dashboard panels included with the add-on." This app DOES NOT have a single visualization and doesn't even support UI viewing.  https://splunkbase.splunk.com/app/2848/#/overview Feel free to leave it a one star rating like I did. 
Hello Team, I have the below event and I am trying to extract the number 29120120 as a field. I tried the search below but had no luck; can anyone help me?

source=system.log index=cassdb ERROR AND "Failed to apply mutation locally" | rex field=_raw "Mutation\sof\s(?\d+)\s"

ERROR [SharedPool-Worker-2] 2020-09-15 20:20:00,815 StorageProxy.java:1348 - Failed to apply mutation locally : {} java.lang.IllegalArgumentException: Mutation of 29120120 bytes is too large for the maximum size of 16777216
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:256) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.db.Keyspace.applyInternal(Keyspace.java:596) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:477) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:210) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:215) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:224) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1342) ~[cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2514) [cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [cassandra-all-3.0.13.1735.jar:3.0.13.1735]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
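For what it's worth, the posted rex has no name in its capture group — (?\d+) is not a valid group — which alone would make it fail. A sketch of a working pattern, exercised here in Python's regex engine (the group name mutation_size is my choice; in SPL the equivalent would be | rex field=_raw "Mutation\sof\s(?<mutation_size>\d+)\sbytes"):

```python
import re

# Named capture group around the byte count; anchoring on the literal
# "bytes" keeps it from matching other numbers in the event.
pattern = r"Mutation\sof\s(?P<mutation_size>\d+)\sbytes"
line = ("java.lang.IllegalArgumentException: Mutation of 29120120 bytes "
        "is too large for the maximum size of 16777216")

m = re.search(pattern, line)
print(m.group("mutation_size"))  # prints 29120120
```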
Hello everyone, I have the below issues; could you please help me with them? a) Not all the members of the Active Directory group are showing up in Splunk. We have cross-verified that they are all part of the group, and we can see this from the backend. b) Some of the users are part of the group and the role is already mapped, but they are getting an "access denied" error; for the rest of them it is working fine. We recently migrated our Splunk version to 7.3.5. Is this something related to the upgrade?
Hi Team, I'm having some issues using OpenStack Swift as S3 for SmartStore. I have the following configuration in my indexes.conf:

[default]
remotePath = volume:s3volume/$_index_name
repFactor = auto

[volume:s3volume]
storageType = remote
path = s3://test
remote.s3.endpoint = https://objectstore-3.eu-nl-1.cloud.com
remote.s3.access_key = 6cbb2###########
remote.s3.secret_key = 20506###########
remote.s3.auth_region = eu-nl-1

When pushing the config I'm getting this error message: <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message> But when using the following command I'm able to push a file into the S3 bucket using the same configuration: /opt/splunk/bin/splunk cmd splunkd rfs -- putF /opt/splunk/bin/test.x.txt volume:s3volume I tried changing the signature version and am still having the same issue. Does anybody know how I can fix this, or why it is happening?
We inject the adrum script into our application pointing to the CDN version of "adrum-latest.js". This of course loads the latest version of the script from the CDN. We noted that as of version 20.8.0.3230 (up to the latest: https://cdn.appdynamics.com/adrum/adrum-20.9.0.3268.js) we are getting a "this.thenCore is not a function" error in all Chrome or Chromium-based browsers (the new Edge, Brave, Vivaldi, Opera), but Firefox, the old Edge, and Internet Explorer are fine (we haven't checked Safari yet). As such we have two questions: 1.) What is a version number from about two weeks ago, when things appeared to be working for us, so that we can link to the older version to verify the issue (and diff the versions)? 2.) Has anyone else encountered this issue, or does anyone know what in our code might trigger it?
I have the following data samples:

Temperature=82.4, Location=xxx.165.152.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=84.2, Location=xxx.165.152.48, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=82.4, Location=xxx.165.154.21, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=82.4, Location=xxx.165.162.22, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=77.0, Location=xxx.165.164.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=75.2, Location=xxx.165.170.17, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=77.0, Location=xxx.165.208.12, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=73.4, Location=xxx.165.224.20, Time=Wed Sep 16 07:43:01 PDT 2020, Type=UPS
Temperature=75.3, Location=xxx.165.52.13, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=77.9, Location=xxx.165.52.14, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=76.3, Location=xxx.165.54.24, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=83.8, Location=xxx.165.48.20, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor
Temperature=73.8, Location=xxx.165.36.21, Time=Wed Sep 16 07:47:01 PDT 2020, Type=TempSensor

I'd like to draw line graphs of the `Temperature` over `Time`, split by `Location` (one curve per sensor), and I'd like a way to label the curves of `Location`s by the value of `Type` (UPS or TempSensor) via label, legend, etc. I'd also like to be able to filter, selectively showing by `Type` value and/or by certain `Location`s. So far, I figured out that I may be able to do the following: | xyseries Time, Location, Temperature but I have yet to figure out how to provide the labeling by `Type`, and filtering by `Type` and `Location`.
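A sketch building on xyseries (field names from the samples above): fold Type into the series name so the legend shows it, and filter with a plain search clause (or a dashboard token) before charting:

```
| search Type="UPS" OR Type="TempSensor"
| eval series=Location." (".Type.")"
| xyseries Time series Temperature
```

To restrict to one sensor, add e.g. Location="xxx.165.152.17" to the search clause. If the Time string sorts oddly on the x-axis, parsing it into _time with strptime and using timechart instead of xyseries may give better time-axis handling.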
Hi Everyone, I need some help here. I have a panel with a numeric value and it has a drilldown panel. When we click on that numeric value, a drilldown panel opens, and that is a table. Now what I want is for the user to be able to close or hide the drilldown panel when it is not needed anymore. How is this possible?

Main panel code
<panel>
  <single>
    <title>TIME</title>
    <search>
      <query>index="abc" sourcetype=xyz Timeout $Org$ | bin span=1d _time |stats count by _time</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="colorBy">value</option>
    <option name="drilldown">all</option>
    <option name="height">100</option>
    <option name="numberPrecision">0</option>
    <option name="rangeValues">[0,10,25,40]</option>
    <option name="trendDisplayMode">percent</option>
    <option name="unit"></option>
    <option name="rangeColors">["0xFF0000","0xFF0000","0xFF0000","0xFF0000","0xFF0000"]</option>
    <option name="useColors">1</option>
    <option name="showSparkline">1</option>
    <option name="trendDisplayMode">percent</option>
    <drilldown>
      <set token="show_panel3">true</set>
      <set token="selected_value3">$click.value$</set>
    </drilldown>
  </single>
</panel>

Drilldown panel
<panel depends="$show_panel3$">
  <title>Statistics of Timeout Exceptions</title>
  <table>
    <title>Time Exceptions</title>
    <search>
      <query>index="ABC" sourcetype=XYZ Timeout |stats count by URL | sort -count </query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
  </table>
</panel>
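One pattern for this (a sketch; data-unset-token is the Simple XML attribute for clearing a token from a link in an html element) is to add a small html element inside the dependent panel that unsets show_panel3, so clicking it hides the panel again since the panel's depends condition is no longer met:

```xml
<panel depends="$show_panel3$">
  <title>Statistics of Timeout Exceptions</title>
  <html>
    <a href="#" data-unset-token="show_panel3">Close this panel</a>
  </html>
  <table>
    <!-- existing table element from the drilldown panel, unchanged -->
  </table>
</panel>
```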
Is it possible with the Windows Add-on to pull the setup.evtx log?
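If you mean the Windows "Setup" event log channel (the channel behind setup.evtx), the Windows add-on's standard WinEventLog input can read it by channel name; a minimal inputs.conf sketch:

```ini
[WinEventLog://Setup]
disabled = 0
```

This goes in the add-on's local inputs.conf on the forwarder; the events then arrive alongside the Application/Security/System channels the add-on collects by default.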
Hi, I am getting the below error while sending the login event from my website to Splunk: Server returned HTTP response code: 400 for URL: http://localhost:8088/services/collector. I have checked via the curl command:

curl -k "http://localhost:8088/services/collector" \
-H "Authorization: Splunk c14********************44f5" \
-d '{"event": "Hello, world!", "sourcetype": "manual"}'

which produced: {"text":"Data channel is missing","code":10} curl: (6) Could not resolve host: \ curl: (6) Could not resolve host: \ curl: (6) Could not resolve host: Hello, world!, curl: (6) Could not resolve host: sourcetype curl: (3) [globbing] unmatched close brace/bracket in column 7

Please let me know what is missing. I have also enabled the HTTP Event Collector in global settings. I am using the latest version of Splunk, version 8.0.6.
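Two observations, offered as a sketch rather than a diagnosis: the "Could not resolve host: \" lines suggest the shell did not interpret the backslash line continuations (typical of Windows cmd), so curl never sent the intended request at all; and "Data channel is missing" (code 10) is the error HEC returns when indexer acknowledgment is enabled on the token but the request lacks an X-Splunk-Request-Channel header. A well-formed request looks like this (token and URL are placeholders from the post; the channel value is an arbitrary GUID you generate):

```python
import json
import urllib.request
import uuid

url = "http://localhost:8088/services/collector/event"
body = json.dumps({"event": "Hello, world!", "sourcetype": "manual"}).encode("utf-8")

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Authorization", "Splunk c14...44f5")  # token elided as in the post
req.add_header("Content-Type", "application/json")
# Only needed when indexer acknowledgment is enabled on the token:
req.add_header("X-Splunk-Request-Channel", str(uuid.uuid4()))

# urllib.request.urlopen(req)  # uncomment to actually send
```

Alternatively, disabling indexer acknowledgment on the token in the HEC settings removes the channel requirement entirely.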