I have a dashboard that reports CDN cache hit ratios. I set it up as a static dashboard that is emailed to a group of people as a performance score for their use. This used to work several months ago, but recently my PDFs show the following error on all of my result queries:

Invalid earliest_time

I reviewed my source code and set the token fields for the earliest and latest times as follows:

<set token="field1.earliest">-7d@d</set>
<set token="field1.latest">now</set>

These tokens are then referenced in the searches the dashboard performs. What would be causing this error on my scheduled reports?
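For reference, the panel searches consume the tokens roughly like this (simplified sketch; the index and field names here are placeholders, not my actual source):

<search>
  <query>index=cdn_logs | stats avg(cache_hit_ratio) as hit_ratio</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>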
We have a search head clustered and indexer clustered environment. We have a deployer, which is not a SH or an indexer, just a Linux machine on the same network. I have put the apps that I want on my SHs into /opt/splunk/etc/shcluster/apps and ran the suggested command (see the sketch below), but they don't show up under Apps. Why is this?

Also, say you have apps on your search heads but not on your deployer under shcluster/apps: if you run the update command the way it is described, it seems like it should get rid of those apps on your search heads, but it doesn't. Why?

And if we are putting apps into shcluster/apps in their .tgz format, how does that make them visible in the SH environment?

PS: If there is anyone out there who has set up and deployed in a DoD-type environment and has some lessons learned, please let me know; I would also like to get some other insights.
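This is what I mean by "the suggested command" (sketch; the target URI and credentials are placeholders for my actual values):

splunk apply shcluster-bundle -target https://sh01.example.com:8089 -auth admin:changeme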
I am in an environment where I am able to get data in from a general perspective. We have an indexer clustered and search head clustered test environment. I can search * and get data, and just deal with that. We have the CIM Vladiator app, and we get errors such as the following. So I go and hunt through the splunkd.log files of the locations it names, but I really can't make heads or tails of what's important to solve any issues I may have. Attached are the splunkd.log files from sh01, indx03, and indx04 respectively.

Should I care about INFO messages, should I worry about warnings, or should I focus on errors? Keep in mind I have tried to search some of these errors, but the answers are ambiguous, not relevant, or don't work. Is there a strategy that people use to go about this? Is there anything seen here that stands out?
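Is a triage search like this a reasonable starting point, or is there a better approach? (sketch)

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count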
I need to blacklist certain syslog messages at the forwarder level. We have raw syslogs like the following:

2023-03-27T00:00:00+00:00 10.10.33.15 Mar 27 2023 00:00:00.028 UTC : %UC_Test-4-DeviceTransientConnection: %[ConnectingPort=2000][DeviceName=AN004A1328478011][IPAddress=10.152.157.107][DeviceType=30027][Reason=3][Protocol=SCCP][IPAddrAttributes=2][UNKNOWN_PARAMNAME:LastSignalReceived=StationRegister][UNKNOWN_PARAMNAME:StationState=wait_register][AppID=Siso CallManager][ClusterID=c6801ccm][NodeID=c6801011ccm007]: A device attempted to register but did not complete registration 0.0.3.1 0 0

2023-03-27T00:00:00+00:00 10.10.33.15 Mar 27 2023 00:00:00.144 UTC : %UC_Test-4-DeviceTransientConnection: %[ConnectingPort=2000][DeviceName=ANF000673BC20003][IPAddress=10.70.56.248][DeviceType=30027][Reason=3][Protocol=SCCP][IPAddrAttributes=2][UNKNOWN_PARAMNAME:LastSignalReceived=StationRegister][UNKNOWN_PARAMNAME:StationState=wait_register][AppID=Siso CallManager][ClusterID=c6801ccm][NodeID=c6801011ccm007]: A device attempted to register but did not complete registration 0.0.3.1 0 0

2023-03-27T00:00:00+00:00 10.10.33.15 Mar 27 2023 00:00:00.147 UTC : %UC_Test-4-DeviceTransientConnection: %[ConnectingPort=2000][DeviceName=AN00A13274B800D][IPAddress=10.108.2.248][DeviceType=30027][Reason=3][Protocol=SCCP][IPAddrAttributes=2][UNKNOWN_PARAMNAME:LastSignalReceived=StationRegister][UNKNOWN_PARAMNAME:StationState=wait_register][AppID=Siso CallManager][ClusterID=c6801ccm][NodeID=c6801011ccm007]: A device attempted to register but did not complete registration 0.0.3.1 0

I need to filter the data before pushing it to the Splunk indexer: I do not want to push events that contain both UC_Test-4-DeviceTransientConnection and Reason=3.

I tried blacklisting it in inputs.conf:

blacklist = ^.*UC_Test-4-DeviceTransientConnection.*\[Reason=3\].*$

The above isn't working, so then I tried props.conf and transforms.conf like below.

props.conf:

[testsys]
TRUNCATE = 0
TRANSFORMS-NULL = setnull

transforms.conf:

[setnull]
REGEX = ^.*UC_Test-4-DeviceTransientConnection.*\[Reason=3\].*$
DEST_KEY = queue
FORMAT = nullQueue

But unfortunately, it's still not filtering.
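Would a simpler, unanchored variant like this be more appropriate, assuming the props/transforms live on the instance that first parses the data? (sketch, untested)

transforms.conf:

[setnull]
REGEX = UC_Test-4-DeviceTransientConnection.*\[Reason=3\]
DEST_KEY = queue
FORMAT = nullQueue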
Hi team,

I am observing high CPU usage, touching 100%, after Splunk was upgraded to 9.0.2. Is this expected? If so, please share the recommended remedy.

Thanks,
Sathish S
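In case it helps to narrow things down, is something like this the right way to see which Splunk process is consuming the CPU? (sketch; the host name is a placeholder)

index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=my_indexer
| timechart max(data.pct_cpu) by data.process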
Hi splunkers,

I am trying to create a table visualization in a Splunk Classic dashboard. My table has many columns, but only three of them (task_link, sight_link, and feedback_link) contain hyperlinks. I created a drilldown for the task_link column, but it activates the whole row instead of the clicked cell. How can I create drilldowns for task_link, sight_link, and feedback_link so that only the clicked cell opens?

<dashboard theme="light">
  <label>dashboard_classic</label>
  <row>
    <panel>
      <title>Report</title>
      <table>
        <title>Report</title>
        <search>
          <query>
            index=* source IN (*) st_id IN (*) component IN (*) tc_name IN (*)
            | eval TruePredScore=round(TruePredScore, 2)
            | table "st_id" "component" "task_link" "sight_link" "TruePredScore" "PredictType" "feedback_link"
          </query>
          <earliest>-10h@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>10m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <drilldown>
          <link target="_blank">$row.task_link|n$</link>
        </drilldown>
        <option name="count">6</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
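Is a per-column conditional drilldown like this the right direction? (sketch, untested; the trailing empty condition is meant to ignore clicks on all other columns)

<drilldown>
  <condition field="task_link">
    <link target="_blank">$row.task_link|n$</link>
  </condition>
  <condition field="sight_link">
    <link target="_blank">$row.sight_link|n$</link>
  </condition>
  <condition field="feedback_link">
    <link target="_blank">$row.feedback_link|n$</link>
  </condition>
  <condition></condition>
</drilldown>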
I upgraded Splunk Enterprise from 8.2.3 to 9.0.3 in a clustered indexer environment. After the upgrade I found that certificates had been created in the local computer store on all of my Splunk servers. All of my Splunk servers are Windows, and I used a custom CA to create my server.pem. SSL still works between clients and indexers. What can I do to remove these new certificates?
Hi, I just want to know: is there any way to forward MongoDB Atlas logs to Splunk? Do we have this feature of forwarding MongoDB Atlas logs to Splunk?
Hello,

Newish to Splunk here. We have an AWX instance (free Tower) and we are trying to send its logs to Splunk using this link: ansible-logs-splunk. All is good there; I can do a tcpdump and see data going out to port 8088 on my Splunk management server. I used this link to set up HEC on Splunk Enterprise 9.0.2: HEC. I can run the curl -k ..... test and get:

RETURNS: {"text":"Success","code":0}

So things seem OK. But when I try a search, I get nothing. We're using the default index. Any ideas?

Thanks,
Aaron
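Is checking the HEC-related internals like this a sensible next step for spotting rejected events? (sketch)

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR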
I have been trying to create this sourcetype and am not sure I'm capturing the timestamp correctly.

Sample date: [2023-03-26T14:06:06.356-04:00]
Regex breakdown: \[\d{4}-\d{2}-\d{2}.\d{2}:\d{2}:\d{2}.\d{3}-\d{2}:\d{2}]
Timestamp: %Y-%m-%d{2}\T\d{2}:%H%:%M.%S.%N-\d{2}:\d{2}

But I'm having issues with the timestamp value. I've not run into one that has no breaks in it before. Any help will be much appreciated.
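Would props.conf settings along these lines be closer? (sketch, untested; the sourcetype name is a placeholder)

[my_sourcetype]
TIME_PREFIX = \[
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30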
I'd like to display the "user count" on a timechart over a 30 day period such that even when only a single day has a count above zero, my line graph will still look like a colored line moving along the base of the x axis until a single spike appears on the day there was a count about zero. Without this, I simply get a dot on a blank page. I could use a bar graph, but even so, it provides no perspective since the left and right limits (day1 and day 30) dont even show a date value my search... index=myindex action="what im looking for" | bin span=1d _time  | stats DC(user) as "user_count" by _time
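Would switching to timechart, which emits contiguous daily buckets across the whole search range, plus a fillnull, do what I'm after? (sketch)

index=myindex action="what im looking for"
| timechart span=1d dc(user) as user_count
| fillnull value=0 user_count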
Hi, we have a Cloud Developer Edition account and we want to test our app. We have hit a few issues, listed below:

1. The Search & Reporting page is giving an error. (PFA searchandreport_error.png)
2. The indexes are not showing on the Indexes page. (PFA indexpage_error.png)
3. The support page of our tenant is also giving us an issue; it tells us to 'contact the admin'. (PFA splunksupport_error.png)

Can you help us find a solution? Thank you.
Hey guys,

I'm having issues getting Splunk Web 8.2.7.1 to open on my local Windows server after installing my SSL certs. I followed the documentation: I took the passphrase off my private key and pointed web.conf at my private key and server cert, but I still get 'this page can't be displayed' when opening Splunk.

[settings]
enableSplunkWebSSL = true
httpport = 443
privKeyPath = D:\DEV\Splunk\etc\auth\mycerts\SplunkCert.key
serverCert = D:\DEV\Splunk\etc\auth\mycerts\SplunkCert.nokey.pem
startwebserver = 1

Even checking the web service log, I don't see any errors that would prevent it. Could you guys help me please?
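In case it matters, this is how I've been verifying which web.conf settings Splunk actually picked up (sketch):

splunk btool web list settings --debug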
Hi team,

I recently upgraded Splunk from 8.1.4 to 9.0.2 on all my instances. The upgrade succeeded and all components and their members are up and healthy. Today I logged into one of the indexers looking for an app under the slave-apps folder, but the slave-apps folder is not showing under the etc folder. Did this happen due to the upgrade, or is there some other cause? Can anyone help with this?
Hello everyone,

I am again asking for your valuable help. I received the notification below by mail, which I do not understand at all. I went to the documentation link it shares and I am still lost. I don't know if I know nothing, or if Splunk assumes that everyone knows the tool from top to bottom.

Hello Splunk Admin,

The Upgrade Readiness App detected 2 apps with deprecated jQuery on the https://xxxxxx.splunkcloud.com:443 instance. The Upgrade Readiness App detects apps with outdated Python or jQuery to help Splunk admins and app developers prepare for new releases of Splunk in which lower versions of Python and jQuery are removed. For more details about your outdated apps, see the Upgrade Readiness App on your Splunk instance listed above. To address the issues detected by the Upgrade Readiness App, work with app developers to update their apps to use only Python 3 or higher and jQuery 3.5 or higher. For more information about addressing issues with outdated apps, removing lower versions of Python or jQuery, and how to manage these emails, see https://docs.splunk.com/Documentation/URA.
Hi,

I am using Eventgen to create metric data. I have it working for events. I want to set up a very basic example, a timestamp and a metric with a basic value (below), but I am getting this error message:

The metric event is not properly structured, source=bcgames, sourcetype=addons, host=buttercup, index=bcg_eventgen_metrics. Metric event data without a metric name and properly formated numerical values are invalid and cannot be indexed. Ensure the input metric data is not malformed, have one or more keys of the form "metric_name:<metric>" (e.g..."metric_name:cpu.idle") with corresponding floating point values.

eventgen.conf:

[sample.lab2data]
interval = 2m
earliest = -2m
latest = now
backfill = -1d
outputMode = metric_httpevent
index = bcg_eventgen_metrics
host = buttercup
source = bcgames
sourcetype = sales:addons
token.0.token = !timestamp!
token.0.replacementType = timestamp
token.0.replacement = %H:%M:%S %b-%d-%Y
token.1.token = !1!
token.1.replacementType = random
token.1.replacement = integer[1:3]

sample.lab2.data:

timestamp=!timestamp! metric_name:cpu.idle=!1!
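For reference, my understanding is that the HEC metrics endpoint ultimately expects a payload shaped like this, so this is what I assume Eventgen needs to produce (sketch of the target format, not my actual output):

{
  "time": 1679920906,
  "event": "metric",
  "host": "buttercup",
  "source": "bcgames",
  "sourcetype": "sales:addons",
  "index": "bcg_eventgen_metrics",
  "fields": {
    "metric_name:cpu.idle": 2
  }
}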
I'm posting a JSON struct such as:

{
  "index": "test_metrics",
  "time": 1679920906.0,
  "event": "metric",
  "host": "myhostname",
  "source": "build.mybuildplan",
  "sourcetype": "trace_profile",
  "fields": {
    "metric_name:metric1": 1234,
    "metric_name:metric2": 1234,
    "metric_name:metric3": 1234,
    ...
    "metric_name:metricN": 1234
  }
}

I noticed that on our Splunk Enterprise server (version 9.0.1) I can successfully post it, but the source, host, and sourcetype fields are not visible in Splunk. After some debugging on a local Splunk install, I found that when I reduce N enough, these fields suddenly come through. Moreover, when I find the largest N for which these fields are shown properly and then make the name of the last metric longer (e.g. "metric_name:metricN_lorem_ipsum_etc"), it also starts to drop these fields. So it looks like it's related to the combined length of all metric names in the JSON?

My questions:
- Has anyone else experienced this?
- What's the magic limit I'm hitting here?
- Most importantly: why can't I see any error message anywhere? It seems to be silently dropping some info. Is this a bug that could be fixed?
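For what it's worth, I've been inspecting what actually lands in the metric index like this (sketch):

| mpreview index=test_metrics
| head 20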
This is the correlation search I currently have:

index=honeypot sourcetype=cowrie
| table _time, username, src_ip, eventid, message
| where eventid!="cowrie.log.closed"
| where src_ip!="10.11.13.29"

Example events:

_time: 2023-03-22 14:25:43 | username: | src_ip: 10.12.8.180 | eventid: hny.command.input | message: CMD: exit
_time: 2023-03-22 14:25:41 | username: root | src_ip: 10.12.8.180 | eventid: hny.login.success | message: login attempt [root/admin] succeeded
_time: 2023-03-22 14:25:38 | username: | src_ip: 10.12.8.180 | eventid: hny.session.connect | message: New connection: 10.12.8.180:2303 (10.11.131.199:2222) [session: 520a4f7b0870]
_time: 2023-03-22 14:25:00 | username: | src_ip: 10.12.8.180 | eventid: hny.command.input | message: CMD:
_time: 2023-03-22 14:25:00 | username: | src_ip: 10.12.8.180 | eventid: hny.command.input | message: CMD:

The correlation search runs every hour and, for the example events shown above, the search is putting out 5 of the same notables (one for each event). How can I have only one notable for each hour? I tried using stats and counting by src_ip, but that only returns the results that have a username.
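Would collapsing to one result per src_ip, using values() so the events with missing usernames aren't dropped, be a reasonable approach? (sketch)

index=honeypot sourcetype=cowrie
| where eventid!="cowrie.log.closed" AND src_ip!="10.11.13.29"
| stats earliest(_time) as _time values(username) as username values(eventid) as eventid values(message) as message by src_ip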
Hi,

I'm looking for SPL that detects when, within a brief span of time (say two hours), a user triggers alerts for both PDM and encrypted files.

Thanks
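A rough shape of what I have in mind (sketch; the index, field, and value names are guesses, not the real ones from my environment):

index=dlp_alerts (alert_type="PDM" OR alert_type="encrypted_file")
| bin _time span=2h
| stats dc(alert_type) as distinct_types values(alert_type) as alert_types by user, _time
| where distinct_types=2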
Hi there,

I am new to Splunk and am currently trying to get Windows service data into Splunk. I am using Splunk Cloud and already have Windows event log data being ingested via a universal forwarder. I was attempting to make use of a search via Splunk Security Essentials and saw the following:

Unfortunately, when trying to find help online or in Splunk Docs, I only found solutions about changing inputs.conf. However, as I am on Splunk Cloud, I do not know if this is possible.

Any help would be appreciated,
Jamie
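Would adding a host monitoring input like this in inputs.conf on the universal forwarder itself, rather than in Splunk Cloud, be the right approach? (sketch, untested; the index name is a placeholder)

[WinHostMon://Service]
type = Service
interval = 300
index = windows
disabled = 0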