All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all! I'm trying to create a table with case_number and session as the two columns. Any event without a case_number won't show up in the table. How do I get them to show up?

index=cui botId=123456789 case_number=* session=* | table case_number session

I tried using | fields case_number instead, but this didn't work either. Appreciate any help!
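A minimal sketch of one possible approach, assuming the goal is simply to show events that have no case_number with a placeholder value: drop the case_number=* filter (which excludes those events up front) and backfill the field with fillnull. Index, botId, and field names are taken from the question.

index=cui botId=123456789 session=*
| fillnull value="N/A" case_number
| table case_number session

fillnull gives the missing field a value, so those events still produce a row in the table.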
Hi, I have a number of raw logs that I need to extract some fields from. When I go to "Event Actions" and then "Extract Fields", I normally get the following: However, I am dealing with a number of logs for one index where I get this instead and I cannot extract anything: How can I extract fields in this case? Thanks, Patrick
Hi, I don't know if this question was previously addressed by the users who asked about multi-stage Sankey diagrams or user-flow displays (the classic marketing scenario of web users navigating a webshop from a start page to the final cart page and spotting the drop-out locations). It is nevertheless a valid diagramming scenario, has been made very popular by analytics platforms like Teradata or Qlik, and has been named Path Sankey by various developers (https://github.com/DaltonRuer/PathSankey). The QlikSense implementations are also based on modified versions of d3.js, similar to the Splunk app. The closest request that someone posted here is maybe this one: https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-search-and-aggregate-user-behavior-data-in-a/m-p/482333 (since I am a beginner Splunker it might be the same topic).

Basically it extends the two-layer source-target concept of standard d3.js to source-multiple_layers-target. The best example can be seen here: https://bl.ocks.org/jeinarsson/e37aa55c3b0e11ae6fa1, and one can imagine that the number of layers / nodes is limited only by CPU power and RAM (although JavaScript limitations exist in almost all browsers).

A practical example (from my field of interest) would be this: suppose we have a hospital with five units through which a patient may pass (not necessarily all of them) and we want to see the patient referral flow between the doctors of those units. We would have, for example, 1000 patient IDs, and for each of them various flows based on referrals from the first unit's doctor to the last one the patient sees (of course not necessarily in alphabetical order and not always five referrals). So we would display 5 layers in the Sankey chart, each layer displaying vertically the corresponding doctor names of that unit as nodes, with node thickness according to the number of incoming links from the previous layer's nodes, equal to count(patient_id). It would be the same as https://bl.ocks.org/jeinarsson/e37aa55c3b0e11ae6fa1 but with 5 layers and a variable number of nodes according to the inputlookup set.

Does anybody know a way to tweak the current Sankey app search along these lines?

| inputlookup referrals.csv
| stats count(patient_id) by 1st_Referring_Layer 2nd_Referring_Layer
---maybe ??---
| stats count(patient_id) by 2nd_Referring_Layer 3rd_Referring_Layer ???
| stats count(patient_id) by 3rd_Referring_Layer 4th_Referring_Layer ???
| stats count(patient_id) by 4th_Referring_Layer 5th_Referring_Layer

If the solution from https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-search-and-aggregate-user-behavior-data-in-a/m-p/482333 is exactly what I am asking for above, please advise. Thank you
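A rough, hedged sketch of one way to feed a multi-layer path into the Splunk Sankey Diagram visualization, which expects rows of source, target, and a count: build one source/target pair per consecutive layer and append the layers together. The lookup and field names follow the question; the "L1:"/"L2:" prefixes are an assumption added here to keep node names unique across layers.

| inputlookup referrals.csv
| eval source="L1: ".'1st_Referring_Layer', target="L2: ".'2nd_Referring_Layer'
| stats count(patient_id) AS count BY source target
| append
    [| inputlookup referrals.csv
     | eval source="L2: ".'2nd_Referring_Layer', target="L3: ".'3rd_Referring_Layer'
     | stats count(patient_id) AS count BY source target]

The remaining layer pairs (3rd to 4th, 4th to 5th) would follow the same pattern with additional append subsearches.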
Hey Splunkers!!! We are planning to deploy the DB Connect app to get data from an Oracle database, and I have the questions below about the Splunk DB Connect app. Please assist.

1) Can we increase the data fetch limit from 300 to 1000 or any other higher value?
fetch_size = <integer>
# optional
# The number of rows to return at a time from the database. The default is 300.

2) Will an increased fetch_size in db_inputs.conf affect database performance?

3) Does this fetch limit differ depending on the database?

4) If we already have scripts that get the same information using some other tool, what are the advantages of the Splunk DB Connect app over them?
We have a setup where AWS KMS logs are sent to Splunk HEC through the flow below. We are getting the events in JSON format, but the necessary field aliasing and tagging is not happening on the SH for the data to be CIM-compliant with the Authentication & Change data models. I have already installed Splunk_TA_aws on both the SH & HF.

KMS -> Kinesis Firehose -> Logstash function -> Splunk HEC (using aws:cloudtrail sourcetype)

Should I be using a different sourcetype for this data source and the data format sent through my flow? Can anyone advise who has worked with AWS KMS data?
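One hedged way to check whether the Splunk_TA_aws extractions are actually matching these events is to look at the eventtype and tag fields on the KMS records. The index wildcard and the eventSource field below are assumptions based on the aws:cloudtrail sourcetype.

index=* sourcetype=aws:cloudtrail eventSource="kms.amazonaws.com"
| head 1000
| stats count BY eventtype tag

If eventtype and tag come back empty, the events are likely not matching the TA's eventtype search strings, which usually points at the sourcetype or at the event structure produced by the Logstash step rather than at the data models themselves.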
Hello All, after installing the IT Essentials Work app in Splunk from the Apps dropdown, I am getting the error below while trying to launch the app. However, whenever I hit the URLs for the IT Essentials Work app individually, the Entity Management and Infrastructure Overview tabs work fine and data comes up. Could you please help with insights into why the ITE Work app is not launching when started from the Apps dropdown? Thanks.
Hello. Our organization has one of our Data Model (DM) searches for ES regularly taking over 200 seconds to complete. Soon another source will be added to the DM, so I have been looking for ways to reduce the runtime.

I came across a site that suggested the macros that build the CIM DMs could be faster by adding the sourcetype alongside the index in the search. My thought was, "Why stop there?" You could add the source too, as long as it doesn't iterate with a date or some random number scheme. And even then, with reasonable wildcarding at the end, I believe there would be a performance improvement.

I was told that this effort is unnecessary, even though in my unofficial tests over the same period I found my modified searches to be nearly twice as fast. So aside from the additional effort to build those searches and maintain them, why is this unnecessary?

Thanks, AJ
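For illustration, a hedged sketch of the change being discussed, using hypothetical index, sourcetype, and source values. The CIM constraint macro (for example cim_Authentication_indexes) would go from

(index=wineventlog)

to

(index=wineventlog sourcetype=XmlWinEventLog source="XmlWinEventLog:Security")

Because index, sourcetype, and source are all index-time metadata fields, the tighter constraint lets the data model's base search discard non-matching events before any field extraction, which is consistent with the roughly 2x speedup seen in the unofficial tests.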
Hi Community, I need to filter data based on a specific field value and route it to a different group of indexers. Data is coming through HEC configured on a Heavy Forwarder like this:

[http://tokenName]
index = main
indexes = main
outputgroup = my_indexers
sourcetype = _json
token = <string>
source = mysource

I'd like to use props.conf and transforms.conf as suggested here, like this:

props.conf
[source::mysource]
TRANSFORMS-routing = otherIndexersRouting

transforms.conf
[otherIndexersRouting]
REGEX = \"domain\"\:\s\"CARD\"
DEST_KEY = _TCP_ROUTING
FORMAT = other_indexers

In outputs.conf I'd add the stanza [tcpout:other_indexers].

Is this possible? Is there another way to achieve this goal?

Thank you, Marta
Would like a way to create a dropdown with Add and Remove choices that will then add or remove the user from the lookup table. So far I have:

<input type="dropdown" token="dropdown_tok" searchWhenChanged="false">
  <label>Action</label>
  <choice value="add">Add</choice>
  <choice value="remove">Remove</choice>
  <choice value="reauthorize">Add</choice>
  <search>
    <query> </query>
  </search>
</input>

Any help would be great!
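A hedged sketch of the searches such a dropdown could drive, assuming a hypothetical lookup file users.csv with a user field and a text input that sets $user_tok$ (neither of those names is in the original post).

Add (append the selected user, then deduplicate):

| makeresults
| eval user="$user_tok$"
| table user
| inputlookup append=true users.csv
| dedup user
| outputlookup users.csv

Remove (keep every row except the selected user):

| inputlookup users.csv
| where user!="$user_tok$"
| outputlookup users.csv

In a dashboard these would typically sit behind a condition on $dropdown_tok$ so the add or remove search runs depending on the chosen action.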
Hello,

Q1: While configuring a #splunk_itsi KPI, under the thresholding section there is an option to Enable KPI Alerting. As configured, the notable event is created when the severity changes from any lower level to critical.

My question is whether there is a way to trigger a notable event when the status is critical, regardless of the state it was in before. In other words, when the severity remains critical from the first check point to the second check point, I need a notable event to be created in this case as well. Is that possible?

Q2: After configuring a #splunk_itsi correlation search as described here, I wasn't able to see notable events created in Episode Review. I have already configured the search in the correlation search and added the associated services, so the final search is as below:

index="itsi_summary" kpi IN ("SH * RAM Static","SH * CPU Adaptive","SH * CPU Static","SH * RAM Adaptive","SH * SWAP") alert_level>1
| `filter_maintenance_services("400f819c-f739-4ffc-a25c-86d48362fef8,917c4030-a422-4645-851e-a5b2b5c7f3cd,7fb610b4-15f2-4d21-b035-b4857c9effef,28aa0103-fb41-4382-ab07-c637c16d3d85,bfe94d80-daf5-43b8-8318-dc881fd30128,b3c8562a-d1d6-465a-b0c7-4a28ba7f4612,225e7eb6-2f7c-4f0f-9221-75b1e8471053,a0826af0-2100-44a4-9b51-558bff966bb7,dcb38bc4-e930-4776-92a8-5de0d50cdc5e,721cb2c5-43fa-4419-9dde-a33a467d7770,328b9170-18d3-4b50-9968-01b1e087f955")`

When I run the search it returns events, so I am not expecting anything wrong in the search query. What am I missing in order to get the notable events visible in the Episode Review tab?

Appreciate your help.
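Regarding Q1, a hedged sketch of a correlation-search style workaround that fires whenever a KPI is at critical, independent of its previous state. This assumes ITSI's usual severity encoding in the itsi_summary index, where alert_level 6 corresponds to critical, which is worth verifying on your version before relying on it.

index=itsi_summary alert_level=6 kpi IN ("SH * RAM Static","SH * CPU Adaptive","SH * CPU Static","SH * RAM Adaptive","SH * SWAP")
| stats latest(alert_value) AS latest_value BY kpi serviceid

Scheduled as a correlation search, this would generate a notable on every run where the KPI is still critical, rather than only on a transition into critical.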
Hi Team, we are trying the search below:

index=index_123 host=xyz source="/sys_apps_01/pqr/logs/xyz/mapper_xyz.log" ContextKeyMatch: Context Field Value

which returns multiple rows. Now we want to extract the data after "Context Field Value". The string containing "Context Field Value" is of variable length. We have multiple rows like this and we need to extract the value from each row, e.g. 005436213114023275. Once we have the extracted data, we need to fetch only the last 12 digits.

Could you please suggest how to do this?
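A hedged sketch using rex and substr; the exact pattern depends on how the value appears after "Context Field Value" in the raw events, so the regex below is an assumption.

index=index_123 host=xyz source="/sys_apps_01/pqr/logs/xyz/mapper_xyz.log" "Context Field Value"
| rex "Context Field Value\D*(?<context_value>\d+)"
| eval last12 = substr(context_value, len(context_value) - 11)
| table context_value last12

substr with a start of len-11 keeps exactly the final 12 characters of the extracted number.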
Say we have data for the date and time range below. I want to pick only Sunday dates and display the last 3 weeks' Sunday data only. So basically, it should pick only the Sunday dates from the input data and display them in the output.

input data              output data (Sunday dates)
2022-04-24 09:00:03     2022-04-24 09:00:03
2022-04-22 12:50:08     2022-04-17 12:34:26
2022-04-17 12:34:26     2022-03-27 15:49:59
2022-03-28 09:41:12     2022-03-20 11:07:21
2022-03-27 15:49:59     2022-03-20 11:07:21
2022-03-25 15:31:18
2022-03-25 15:00:32
2022-03-25 14:45:03
2022-03-20 13:28:54
2022-03-20 11:07:21
2022-03-10 16:11:32
2022-03-10 14:31:15
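A hedged sketch, assuming the timestamps above are the events' _time values and the index name is a placeholder: filter on the weekday with strftime and restrict the time range to the last three weeks.

index=your_index earliest=-21d@d
| eval weekday=strftime(_time, "%A")
| where weekday="Sunday"
| table _time weekday

strftime(_time, "%A") returns the day name, so only events whose timestamp falls on a Sunday survive the where clause.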
Don't know if this is the right place to ask this, but I do wonder... I see that web_access.log is described as below:

web_access.log => config location \Splunk\etc\system\default\web.conf
# HTTP access log filename
log.access_file = web_access.log
# Maximum file size of the access log, in bytes
log.access_maxsize = 25000000
# Maximum number of rotated log files to retain
log.access_maxfiles = 5

But for metrics.log, I only find this:

[source::...\\var\\log\\splunk\\metrics.log(.\d+)?]
sourcetype = splunkd
[source::...\\token_input_metrics.log(.\d+)?]
sourcetype = token_endpoint_metrics
[source::...\\http_event_collector_metrics.log(.\d+)?]
sourcetype = http_event_collector_metrics

What should I read, and where, for more information? Thanks.
(Single/standalone instance of Splunk.) I have been in a fight with these events for over a week now. I was hoping my failures would eventually add up to a glorious success, but it turns out I am finding EVEN MORE FAILURES. So many more. I am getting data from a source that provides single-line JSON events, and I have a few problems here.

My JSON data has a consistent field located at ["event"]["original"], BUT the contents of .original often contain more nested data, which is breaking my regexes. I keep making new ones for each new "shape" I find, but it just seems tedious when the JSON contains it all nice and neat for me.

Props:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractSuriStats, extract_suri_protocol_msg, extractMessage

Transforms:

[extractMessage]
REGEX = "original":([\s\S]*?})},"
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[extractSuriStats]
REGEX = "event_type":"stats"[\s\S]+({"event_type":"stats".+})}}
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[extract_suri_protocol_msg]
REGEX = "original":([\s\S]*})},"
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[sourcetyper]
LOOKAHEAD = 100000

This is fragile, and keeps breaking when a new "nested" shape comes through. Now, let's assume the above works, but then BAM, an event comes through with a payload of 47000 characters of "\\0" contained in the JSON. My extractions above continue to work, but the events themselves no longer parse (at search time?). I have pretty JSON, but no key/value pairs that I can act on. OK, I think, what if I just replace the payload with --deleted--? Well, SEDCMD seems not to apply very often, and I wonder if it has the same character limitation, but I don't see a limit to configure for it. My seds:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
SEDCMD-payload = s/payload_printable":([\s\S]*)",/ ---payload string has been truncated by splunk admins at index time--- /g
SEDCMD-response = s/http_response_body_printable":([\s\S]*)"}/ ---payload string has been truncated by splunk admins at index time--- /g
SEDCMD-fluff = s/(?:\\\\0){20,}/ ---html string has been truncated by splunk admins at index time--- /g
TRANSFORMS-also = extractSuriStats, extract_suri_protocol_msg, extractMessage

What I would much prefer to do is, again, just work with the JSON directly, but I don't think that is possible. My frustration continues, so I think: what if I intercept the JSON and throw Python at it? I see a few references to using unarchive_cmd, and get an idea...
#!/usr/bin/python
import json
import sys

def ReadEvent(jsonSingleLine):
    # Parse one single-line JSON event read from stdin
    data = json.loads(jsonSingleLine)
    return data

def FindOriginalEvent(data):
    # Return the nested ["event"]["original"] object if present;
    # otherwise return the event unchanged so nothing is silently dropped
    if 'event' in data and 'original' in data['event']:
        return data['event']['original']
    return data

while True:
    fromSplunk = sys.stdin.readline()
    if not len(fromSplunk):
        break
    eventString = json.dumps(FindOriginalEvent(ReadEvent(fromSplunk)))
    # Emit one JSON event per line so downstream line breaking still works
    sys.stdout.write(eventString + "\n")
    sys.stdout.flush()
sys.exit()

Props:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

[(?::){0}suricata:*]
invalid_cause = archive
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

[suricata]
invalid_cause = archive
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

(I put it everywhere, to make sure it would work.) The code is ugly and useless. **bleep**. Art imitates life today... So I am left with either:

- A bunch of regexes and SEDCMDs that break when the event is too long
- A custom script that I am apparently wrong on

Which direction do I focus my attention on? Any suggestions would be a huge help.

Sample event:

{"destination": {"ip": "xxx","port": 443,"address": "xxx"},"ecs": {"version": "1.12.0"},"host": {"name": "ptm-nsm"},"fileset": {"name": "eve"},"input": {"type": "log"},"suricata": {"eve": {"http": {"http_method": "CONNECT","hostname": "xxx","status": 200,"length": 0,"http_port": 443,"url": "xxx","protocol": "HTTP/1.0","http_user_agent": "Mozilla/4.0 (compatible;)"},"payload_printable": "xxxxx","alert": {"metadata": {"updated_at": ["2021_11_24"],"created_at": ["2011_12_08"]},"category": "A Network Trojan was detected","gid": 1,"signature": "ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan","action": "allowed","signature_id": 2014002,"rev": 10,"severity": 1,"rule": "alert http $HOME_NET any -> $EXTERNAL_NET any (msg:\"ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan\"; flow:established,to_server; content:\"Mozilla/4.0|20 28|compatible|3b 29|\"; http_user_agent; fast_pattern; isdataat:!1,relative; content:!\".bluecoat.com\"; http_host; http_header_names; content:!\"BlueCoat\"; nocase; threshold:type limit, track by_src, count 1, seconds 60; classtype:trojan-activity; sid:2014002; rev:10; metadata:created_at 2011_12_08, updated_at 2021_11_24;)"},"packet": "RQA==","stream": 1,"flow_id": "769386515195888","app_proto": "http","flow": {"start": "2022-05-10T10:43:58.911344+0000","pkts_toclient": 3,"pkts_toserver": 4,"bytes_toserver": 1102,"bytes_toclient": 245},"event_type": "alert","tx_id": 0,"packet_info": {"linktype": 12}}},"service": {"type": "suricata"},"source": {"ip": "xxx","port": 64391,"address": "xxx"},"log": {"offset": 1062706606,"file": {"path": "/opt/suricata/eve.json"}},"network.direction": "external","@timestamp": "2022-05-10T10:43:59.106Z","agent": {"hostname": "xxx","ephemeral_id": "xxx","type": "filebeat","version": "7.16.2","id": "xxx","name": "ptm-nsm"},"tags": ["iap","suricata"],"@version": "1","event": {"created": "2022-05-10T10:43:59.340Z","module": "suricata","dataset": "suricata.eve","original": {"http": {"http_method": "CONNECT","hostname": "xxx","status": 200,"url": "xxx:443","http_port": 443,"length": 0,"protocol": "HTTP/1.0","http_user_agent": "Mozilla/4.0 (compatible;)"},"dest_port": 443,"payload_printable": "CONNECT xxx:443 HTTP/1.0\r\nUser-Agent: Mozilla/4.0 (compatible;)\r\nHost: xxx\r\n\r\n","alert": {"metadata": {"updated_at": 
["2021_11_24"],"created_at": ["2011_12_08"]},"category": "A Network Trojan was detected","gid": 1,"action": "allowed","signature": "ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan","signature_id": 2014002,"rev": 10,"severity": 1,"rule": "alert http $HOME_NET any -> $EXTERNAL_NET any (msg:\"ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan\"; flow:established,to_server; content:\"Mozilla/4.0|20 28|compatible|3b 29|\"; http_user_agent; fast_pattern; isdataat:!1,relative; content:!\".bluecoat.com\"; http_host; http_header_names; content:!\"BlueCoat\"; nocase; threshold:type limit, track by_src, count 1, seconds 60; classtype:trojan-activity; sid:2014002; rev:10; metadata:created_at 2011_12_08, updated_at 2021_11_24;)"},"packet": "RQAAKAA9ZMAAA==","stream": 1,"flow_id": 769386515195888,"proto": "TCP","app_proto": "http","src_port": 64391,"dest_ip": "xxx","event_type": "alert","flow": {"start": "2022-05-10T10:43:58.911344+0000","pkts_toserver": 4,"pkts_toclient": 3,"bytes_toserver": 1102,"bytes_toclient": 245},"timestamp": "2022-05-10T10:43:59.106396+0000","tx_id": 0,"src_ip": "xxx","packet_info": {"linktype": 12}}},"network": {"transport": "TCP","community_id": "Ns="}}    
Hi, I am trying to request metric data from my controller using the metric-data REST API. However, the frequency of data points shows inconsistent behaviour between time frames for current data and older dates.

For example: fetch a Business Transaction's Calls per Minute metric data for a 15-minute frame today. The response has frequency ONE_MIN and 15 data points, which is accurate.

But for an older date, say 9th May 2022, with the data requested on 11th May 2022 for a 15-minute frame similar to the above, the response has frequency TEN_MIN and only 1 data point, which is inaccurate.

Note: rollup is false in both cases.

https://xx.saas.appdynamics.com/controller/rest/applications/xx/metric-data?metric-path=Service+Endpoints%7Cxx%7C%2Fxx%7CIndividual+Nodes%7Cbaseapp%7CCalls+per+Minute&time-range-type=BETWEEN_TIMES&start-time=1652107140000&end-time=1652108040000&output=json&rollup=false

[
  {
    "metricId": 2960604,
    "metricName": "BTM|Application Diagnostic Data|SEP:211315|Calls per Minute",
    "metricPath": "Service Endpoints|xx|/xx|Individual Nodes|baseapp|Calls per Minute",
    "frequency": "TEN_MIN",
    "metricValues": [
      {
        "startTimeInMillis": 1652107200000,
        "occurrences": 1,
        "current": 141,
        "min": 0,
        "max": 167,
        "useRange": true,
        "count": 10,
        "sum": 1264,
        "value": 126,
        "standardDeviation": 0
      }
    ]
  }
]

Urgent help required on this one...
Is there a way to get data in JSON format into the KV Store in one go using the "storage/collections/data/{collection}/" API endpoint? 10,000 lines of events in one go?
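A hedged aside: on the REST side, the batch_save variant of that endpoint accepts a JSON array, but the number of documents per request is capped by a limits.conf setting (max_documents_per_batch_save, 1000 by default as far as I recall), so 10,000 events would normally be split into several batches. An SPL-side alternative for a one-shot bulk load, assuming a hypothetical lookup definition my_kvstore_lookup already points at the collection and field1/field2/field3 stand in for its fields:

index=your_index sourcetype=your_json_sourcetype
| table field1 field2 field3
| outputlookup my_kvstore_lookup append=true

outputlookup writes every row of the search result into the collection in a single search run.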
Hi! I'm running Splunk DB Connect 3.6.0 on my HF (ver 8.0.9) and having some issues with one of my inputs. I'm trying to index license usage data from AppDynamics into Splunk with the query below. It runs fine in the GUI, I can see the results, and I don't get any errors completing the input guide. I checked splunk_app_db_connect_audit_command.log for errors, but it logs "state=success". However, splunk_app_db_connect_job_metrics.log says "read_count=0 write_count=0 error_count=0". This is the only input against MySQL. Any ideas?

SELECT
    usage_host.account_id AS AccountID,
    usage_host.host_id AS UniqueHostID,
    usage_host.is_fallback_host AS FallbackHost,
    usage_host.virtual_cpus AS vCPUcount,
    host_leased_units.usageUnits AS AccountHostLeasedUnits,
    if(usage_host.is_fallback_host, usage_lease.account_units, 0) AS AccountLicenseEntityLeasedUnits,
    conf_package.id AS AccountLicensePackage,
    usage_license_entity.agent_type AS AgentType,
    from_unixtime(usage_license_entity.register_date) AS LeaseDate,
    usage_allocation_package.allocation_name AS LicenseRule,
    from_unixtime((floor(unix_timestamp() / 300) * 300)) AS SnapshotValidAt
FROM usage_lease
JOIN usage_host ON usage_host.id = usage_lease.usage_host_id
JOIN usage_allocation_package ON usage_allocation_package.id = usage_lease.usage_allocation_package_id
JOIN usage_license_entity ON usage_license_entity.id = usage_lease.usage_license_entity_id
JOIN conf_package ON conf_package.int_id = usage_lease.usage_package_id
JOIN (
    SELECT usage_host.host_id, round(sum(usage_lease.account_units)) AS usageUnits
    FROM usage_lease
    JOIN usage_host ON usage_host.id = usage_lease.usage_host_id
    WHERE usage_lease.created_date = (floor(unix_timestamp() / 300) * 300)
      AND usage_host.account_id = 2
    GROUP BY usage_host.host_id
) AS host_leased_units ON host_leased_units.host_id = usage_host.host_id
WHERE (usage_lease.created_date = (floor(unix_timestamp() / 300) * 300)
    AND usage_host.account_id = usage_allocation_package.account_id
    AND usage_allocation_package.account_id = usage_license_entity.account_id
    AND usage_license_entity.account_id = 2)
ORDER BY usage_host.host_id;
Hi, I have this JSON in my Splunk: Serverip, serverRamUsage, TotalRAM, ServiceRAMUsage, serverCPUUsage, TotalCPU, ServiceCPUUsage. I want to add to my dashboard what is shown in the picture, but I'm not succeeding. I also want to show the values as percentages, meaning the total CPU/RAM is treated as 100%.
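A hedged sketch of the percentage calculation, assuming the fields listed above are numeric and that serverRamUsage/serverCPUUsage are in the same units as TotalRAM/TotalCPU (the index name is a placeholder):

index=your_index
| eval ram_pct = round(serverRamUsage / TotalRAM * 100, 1)
| eval cpu_pct = round(serverCPUUsage / TotalCPU * 100, 1)
| table Serverip ram_pct cpu_pct

Rendered as a single value or radial gauge panel with a 0-100 range, this gives the "total as 100%" view described.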
Hi there - I am trying to filter out some noisy rules in a specific firewall (FWCL01) from being ingested into Splunk.

On my heavy forwarder that sends into Splunk I have applied the following props.conf and transforms.conf:

PROPS.CONF
[host::FWCL01]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

TRANSFORMS.CONF
[FWCL01_ruleid0_to_null]
REGEX = policyid=0
DEST_KEY = queue
FORMAT = nullQueue

[FWCL01_ruleid4_to_null]
REGEX = policyid=4
DEST_KEY = queue
FORMAT = nullQueue

This doesn't seem to work. However, when I change props.conf to use the sourcetype [fgt-traffic] as per below, it works:

[fgt_traffic]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

The logs look like the following:

May 11 16:12:54 10.8.11.1 logver=602101263 timestamp=1652256773 devname="FWCL01" devid="XXXXXXX" vd="Outer-DMZ" date=2022-05-11 time=16:12:53 logid="0000000013" type="traffic" subtype="forward" level="notice" eventtime=1652256774280610010 tz="+0800" srcip=45.143.203.10 srcport=8080 srcintf="XXXX" srcintfrole="lan" dstip=XXXX dstport=8088 dstintf="XXXX" dstintfrole="undefined" srcinetsvc="Malicious-Malicious.Server" sessionid=2932531463 proto=6 action="deny" policyid=4 policytype="policy" poluuid="XXXXX" service="tcp/8088" dstcountry="Australia" srccountry="Netherlands" trandisp="noop" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high" mastersrcmac="XXXXX" srcmac="XXXXX" srcserver=0

When I use btool it looks like the correct props are being applied:

D:\Program Files\Splunk\bin>splunk btool props list | findstr FWCL01
[host::FWCL01]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

Any ideas?
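A hedged check worth running: host:: stanzas in props.conf match the host field value assigned at input time, which for syslog traffic is often the sending IP (10.8.11.1 in the sample) rather than the devname. The search below, with a placeholder index name and the sourcetype taken from the working stanza, shows what host Splunk is actually assigning to these events:

index=your_firewall_index sourcetype=fgt_traffic
| stats count BY host sourcetype

If host comes back as 10.8.11.1 instead of FWCL01, the [host::FWCL01] stanza would never match, which is consistent with the sourcetype-based stanza working.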
Hi, I tried to install the Splunk UF on Windows Server 2008, but an error appeared. You can see the error in the screenshot.

I'd appreciate your support.

Thanks