All Topics



Hi Team, we are running the search below:

index=index_123 host=xyz source="/sys_apps_01/pqr/logs/xyz/mapper_xyz.log" ContextKeyMatch: Context Field Value

which returns multiple rows. We want to extract the data that appears after "Context Field Value" in each row; the string containing "Context Field Value" is of variable length. From each row we need to extract a value such as 005436213114023275, and once we have the extracted value we need to keep only its last 12 digits. Could you please suggest how to do this?
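A minimal SPL sketch of one way to do this, assuming the value of interest is the digit string that follows the literal text "Context Field Value" (the field names extracted and last12 are made up for illustration):

index=index_123 host=xyz source="/sys_apps_01/pqr/logs/xyz/mapper_xyz.log" "Context Field Value"
| rex "Context Field Value\D*(?<extracted>\d+)"
| eval last12=substr(extracted, len(extracted)-11)
| table _raw extracted last12

substr() with a start position of len(extracted)-11 keeps the final 12 characters regardless of how long the extracted string is.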
Suppose we have data for the date and time range below. I want to pick only Sunday dates and display the last 3 weeks of Sunday data only. In other words, the search should select only the Sunday dates from the input data and show them in the output.

input data             output data (Sunday dates)
2022-04-24 09:00:03    2022-04-24 09:00:03
2022-04-22 12:50:08    2022-04-17 12:34:26
2022-04-17 12:34:26    2022-03-27 15:49:59
2022-03-28 09:41:12    2022-03-20 11:07:21
2022-03-27 15:49:59    2022-03-20 11:07:21
2022-03-25 15:31:18
2022-03-25 15:00:32
2022-03-25 14:45:03
2022-03-20 13:28:54
2022-03-20 11:07:21
2022-03-10 16:11:32
2022-03-10 14:31:15
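A hedged SPL sketch of one approach, assuming the timestamps are in _time and that "last 3 weeks" means Sundays falling within the last 21 days (the field names day_of_week and sunday_cutoff are illustrative):

... | eval day_of_week=strftime(_time, "%A")
| where day_of_week="Sunday"
| eval sunday_cutoff=relative_time(now(), "-21d@d")
| where _time >= sunday_cutoff
| table _time day_of_week

strftime(_time, "%A") returns the weekday name, so filtering on "Sunday" keeps only Sunday events; the relative_time() cutoff limits the result to roughly the last three weeks.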
Don't know if this is the right location to ask this, but I do wonder... I see that web_access.log is described as below:

web_access.log =>> config location \Splunk\etc\system\default\web.conf
# HTTP access log filename
log.access_file = web_access.log
# Maximum file size of the access log, in bytes
log.access_maxsize = 25000000
# Maximum number of rotated log files to retain
log.access_maxfiles = 5

But for metrics.log, I only find this:

[source::...\\var\\log\\splunk\\metrics.log(.\d+)?]
sourcetype = splunkd
[source::...\\token_input_metrics.log(.\d+)?]
sourcetype = token_endpoint_metrics
[source::...\\http_event_collector_metrics.log(.\d+)?]
sourcetype = http_event_collector_metrics

What should I read, and where can I find more info? Thanks.
(Single/standalone instance of Splunk) I have been in a fight with these events for over a week now. I was hoping eventually my failures would add up to a glorious success, but it turns out that I am finding EVEN MORE FAILURES. So many more.

I am getting data from a source that provides single-line JSON events. I have a few problems here: my JSON data has a consistent field located at ["event"]["original"], BUT the contents of .original often contain more nested data, which is breaking my regexes. I keep making new ones for each new "shape" I find, but it just seems tedious when the JSON contains it all nice and neat for me.

Props:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractSuriStats, extract_suri_protocol_msg, extractMessage

Transforms:

[extractMessage]
REGEX = "original":([\s\S]*?})},"
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[extractSuriStats]
REGEX = "event_type":"stats"[\s\S]+({"event_type":"stats".+})}}
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[extract_suri_protocol_msg]
REGEX = "original":([\s\S]*})},"
LOOKAHEAD = 100000
DEST_KEY = _raw
FORMAT = $1
WRITE_META = true

[sourcetyper]
LOOKAHEAD = 100000

This is fragile, and keeps breaking when a new "nested" shape comes through. Now, let's assume the above works, but then BAM, an event comes through with a payload of 47000 characters of "\\0" contained in the JSON. My above extractions continue to work, but the events themselves no longer parse (at search time?). I have pretty JSON, but no key/value pairs that I can act off of.

OK, I think! What if I just replace the payload with --deleted--! Well, SEDCMD seems to not apply too terribly often, and I wonder if it has the same character limitation, but I don't see a limit to configure for it. My seds:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
SEDCMD-payload = s/payload_printable":([\s\S]*)",/ ---payload string has been truncated by splunk admins at index time--- /g
SEDCMD-response = s/http_response_body_printable":([\s\S]*)"}/ ---payload string has been truncated by splunk admins at index time--- /g
SEDCMD-fluff = s/(?:\\\\0){20,}/ ---html string has been truncated by splunk admins at index time--- /g
TRANSFORMS-also = extractSuriStats, extract_suri_protocol_msg, extractMessage

What I would much prefer to do is, again, just work with the JSON directly. But I don't think that is possible. My frustration continues, so I think: what if I intercept the JSON and throw Python things at it! I see a few references to using unarchive_cmd, and get an idea...
#!/usr/bin/python
import json
import sys

def ReadEvent(jsonSingleLine):
    # Parse one line of JSON from stdin into a dict
    data = json.loads(jsonSingleLine)
    return data

def FindOriginalEvent(data):
    # Return the nested ["event"]["original"] object if present
    if 'event' in data:
        if 'original' in data['event']:
            originalEvent = data["event"]["original"]
            return originalEvent

while True:
    fromSplunk = sys.stdin.readline()
    if not len(fromSplunk):
        break
    eventString = json.dumps(FindOriginalEvent(ReadEvent(fromSplunk)))
    sys.stdout.write(eventString)
    sys.stdout.flush()
sys.exit()

Props:

[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

[(?::){0}suricata:*]
invalid_cause = archive
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

[suricata]
invalid_cause = archive
unarchive_cmd = /opt/splunk/etc/apps/stamus_for_splunk/bin/parse_suricata.py

(I put it everywhere, to make sure it would work.) The code is ugly and useless. **bleep**. Art imitates life today... So I am left with either: a bunch of regexes and SEDCMDs that break when the event is too long, or a custom script that I am apparently wrong on. Which direction do I focus my attention on? Any suggestions would be a huge help.

Sample event:

{"destination": {"ip": "xxx","port": 443,"address": "xxx"},"ecs": {"version": "1.12.0"},"host": {"name": "ptm-nsm"},"fileset": {"name": "eve"},"input": {"type": "log"},"suricata": {"eve": {"http": {"http_method": "CONNECT","hostname": "xxx","status": 200,"length": 0,"http_port": 443,"url": "xxx","protocol": "HTTP/1.0","http_user_agent": "Mozilla/4.0 (compatible;)"},"payload_printable": "xxxxx","alert": {"metadata": {"updated_at": ["2021_11_24"],"created_at": ["2011_12_08"]},"category": "A Network Trojan was detected","gid": 1,"signature": "ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan","action": "allowed","signature_id": 2014002,"rev": 10,"severity": 1,"rule": "alert http $HOME_NET any -> $EXTERNAL_NET any (msg:\"ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan\"; flow:established,to_server; content:\"Mozilla/4.0|20 28|compatible|3b 29|\"; http_user_agent; fast_pattern; isdataat:!1,relative; content:!\".bluecoat.com\"; http_host; http_header_names; content:!\"BlueCoat\"; nocase; threshold:type limit, track by_src, count 1, seconds 60; classtype:trojan-activity; sid:2014002; rev:10; metadata:created_at 2011_12_08, updated_at 2021_11_24;)"},"packet": "RQA==","stream": 1,"flow_id": "769386515195888","app_proto": "http","flow": {"start": "2022-05-10T10:43:58.911344+0000","pkts_toclient": 3,"pkts_toserver": 4,"bytes_toserver": 1102,"bytes_toclient": 245},"event_type": "alert","tx_id": 0,"packet_info": {"linktype": 12}}},"service": {"type": "suricata"},"source": {"ip": "xxx","port": 64391,"address": "xxx"},"log": {"offset": 1062706606,"file": {"path": "/opt/suricata/eve.json"}},"network.direction": "external","@timestamp": "2022-05-10T10:43:59.106Z","agent": {"hostname": "xxx","ephemeral_id": "xxx","type": "filebeat","version": "7.16.2","id": "xxx","name": "ptm-nsm"},"tags": ["iap","suricata"],"@version": "1","event": {"created": "2022-05-10T10:43:59.340Z","module": "suricata","dataset": "suricata.eve","original": {"http": {"http_method": "CONNECT","hostname": "xxx","status": 200,"url": "xxx:443","http_port": 443,"length": 0,"protocol": "HTTP/1.0","http_user_agent": "Mozilla/4.0 (compatible;)"},"dest_port": 443,"payload_printable": "CONNECT xxx:443 HTTP/1.0\r\nUser-Agent: Mozilla/4.0 (compatible;)\r\nHost: xxx\r\n\r\n","alert": {"metadata": {"updated_at": 
["2021_11_24"],"created_at": ["2011_12_08"]},"category": "A Network Trojan was detected","gid": 1,"action": "allowed","signature": "ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan","signature_id": 2014002,"rev": 10,"severity": 1,"rule": "alert http $HOME_NET any -> $EXTERNAL_NET any (msg:\"ET TROJAN Fake Variation of Mozilla 4.0 - Likely Trojan\"; flow:established,to_server; content:\"Mozilla/4.0|20 28|compatible|3b 29|\"; http_user_agent; fast_pattern; isdataat:!1,relative; content:!\".bluecoat.com\"; http_host; http_header_names; content:!\"BlueCoat\"; nocase; threshold:type limit, track by_src, count 1, seconds 60; classtype:trojan-activity; sid:2014002; rev:10; metadata:created_at 2011_12_08, updated_at 2021_11_24;)"},"packet": "RQAAKAA9ZMAAA==","stream": 1,"flow_id": 769386515195888,"proto": "TCP","app_proto": "http","src_port": 64391,"dest_ip": "xxx","event_type": "alert","flow": {"start": "2022-05-10T10:43:58.911344+0000","pkts_toserver": 4,"pkts_toclient": 3,"bytes_toserver": 1102,"bytes_toclient": 245},"timestamp": "2022-05-10T10:43:59.106396+0000","tx_id": 0,"src_ip": "xxx","packet_info": {"linktype": 12}}},"network": {"transport": "TCP","community_id": "Ns="}}    
Hi, I am trying to request metric data from my controller using the metric-data REST API. However, the frequency of the data points shows inconsistent behaviour between time frames for current data and for older dates. For example, fetching a Business Transaction's Calls per Minute metric data for a 15-minute frame today returns Frequency ONE_MIN and 15 data points, which is accurate. But for an older date, say 9th May 2022 requested on 11th May 2022 for the same kind of 15-minute frame, the response has Frequency TEN_MIN and only 1 data point, which is inaccurate. Note: rollup is false in both cases.

https://xx.saas.appdynamics.com/controller/rest/applications/xx/metric-data?metric-path=Service+Endpoints%7Cxx%7C%2Fxx%7CIndividual+Nodes%7Cbaseapp%7CCalls+per+Minute&time-range-type=BETWEEN_TIMES&start-time=1652107140000&end-time=1652108040000&output=json&rollup=false

[
  {
    "metricId": 2960604,
    "metricName": "BTM|Application Diagnostic Data|SEP:211315|Calls per Minute",
    "metricPath": "Service Endpoints|xx|/xx|Individual Nodes|baseapp|Calls per Minute",
    "frequency": "TEN_MIN",
    "metricValues": [
      {
        "startTimeInMillis": 1652107200000,
        "occurrences": 1,
        "current": 141,
        "min": 0,
        "max": 167,
        "useRange": true,
        "count": 10,
        "sum": 1264,
        "value": 126,
        "standardDeviation": 0
      }
    ]
  }
]

Urgent help required on this one...
Is there a way to get data in JSON format into the KV Store in one go using the "storage/collections/data/{collection}/" API endpoint? For example, 10000 lines of events in one go?
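A hedged sketch of one way this is commonly done, using the KV Store batch_save endpoint rather than one POST per record (the app name kvstore_app, the collection name mycollection, and the credentials are placeholders):

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/kvstore_app/storage/collections/data/mycollection/batch_save \
  -H "Content-Type: application/json" \
  -d '[{"field1": "value1", "field2": "value2"}, {"field1": "value3", "field2": "value4"}]'

The request body is a JSON array, so many records can be posted in a single call; very large payloads may still need to be split into chunks, since the KV Store enforces a per-request batch size limit in limits.conf.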
Hi! I'm running Splunk DB Connect 3.6.0 on my HF (version 8.0.9) and having some issues with one of my inputs. I'm trying to index license usage data from AppDynamics into Splunk with the query below. It runs fine in the GUI, I can see the results, and I don't get any errors completing the input guide. I checked splunk_app_db_connect_audit_command.log for errors, but it logs "state=success". However, splunk_app_db_connect_job_metrics.log says "read_count=0 write_count=0 error_count=0". This is the only input against MySQL. Any ideas?

SELECT usage_host.account_id AS AccountID,
  usage_host.host_id AS UniqueHostID,
  usage_host.is_fallback_host AS FallbackHost,
  usage_host.virtual_cpus AS vCPUcount,
  host_leased_units.usageUnits AS AccountHostLeasedUnits,
  if(usage_host.is_fallback_host, usage_lease.account_units, 0) AS AccountLicenseEntityLeasedUnits,
  conf_package.id AS AccountLicensePackage,
  usage_license_entity.agent_type AS AgentType,
  from_unixtime(usage_license_entity.register_date) AS LeaseDate,
  usage_allocation_package.allocation_name AS LicenseRule,
  from_unixtime((floor(unix_timestamp() / 300) * 300)) AS SnapshotValidAt
FROM usage_lease
JOIN usage_host ON usage_host.id = usage_lease.usage_host_id
JOIN usage_allocation_package ON usage_allocation_package.id = usage_lease.usage_allocation_package_id
JOIN usage_license_entity ON usage_license_entity.id = usage_lease.usage_license_entity_id
JOIN conf_package ON conf_package.int_id = usage_lease.usage_package_id
JOIN (SELECT usage_host.host_id, round(sum(usage_lease.account_units)) AS usageUnits
      FROM usage_lease
      JOIN usage_host ON usage_host.id = usage_lease.usage_host_id
      WHERE usage_lease.created_date = (floor(unix_timestamp() / 300) * 300)
        AND usage_host.account_id = 2
      GROUP BY usage_host.host_id) AS host_leased_units ON host_leased_units.host_id = usage_host.host_id
WHERE (usage_lease.created_date = (floor(unix_timestamp() / 300) * 300)
  AND usage_host.account_id = usage_allocation_package.account_id
  AND usage_allocation_package.account_id = usage_license_entity.account_id
  AND usage_license_entity.account_id = 2)
ORDER BY usage_host.host_id;
Hi, I have this JSON in my Splunk: Serverip, serverRamUsage, TotalRAM, ServiceRAMUsage, serverCPUUsage, TotalCPU, ServiceCPUUsage. I want to add to my dashboard what is shown in the picture, but I'm not succeeding. I also want to show the values in percent, which means taking the total CPU/RAM and treating it as 100%.
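A minimal SPL sketch of the percentage part, assuming serverRamUsage/TotalRAM and serverCPUUsage/TotalCPU are numeric fields already extracted from the JSON (the output field names ramPct and cpuPct are made up):

... | eval ramPct=round(serverRamUsage / TotalRAM * 100, 2)
| eval cpuPct=round(serverCPUUsage / TotalCPU * 100, 2)
| table Serverip ramPct cpuPct

The resulting table, or a single-value/gauge panel driven by it, shows each server's usage as a percentage of its total.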
Hi there - I am trying to filter out some noisy rules in a specific firewall (FWCL01) from being ingested into Splunk. On my heavy forwarder that sends into Splunk I have applied the following props.conf and transforms.conf:

props.conf
[host::FWCL01]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

transforms.conf
[FWCL01_ruleid0_to_null]
REGEX = policyid=0
DEST_KEY = queue
FORMAT = nullQueue

[FWCL01_ruleid4_to_null]
REGEX = policyid=4
DEST_KEY = queue
FORMAT = nullQueue

This doesn't seem to work. However, when I change props.conf to use the sourcetype as per below, it works:

[fgt_traffic]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

The logs look like this:

May 11 16:12:54 10.8.11.1 logver=602101263 timestamp=1652256773 devname="FWCL01" devid="XXXXXXX" vd="Outer-DMZ" date=2022-05-11 time=16:12:53 logid="0000000013" type="traffic" subtype="forward" level="notice" eventtime=1652256774280610010 tz="+0800" srcip=45.143.203.10 srcport=8080 srcintf="XXXX" srcintfrole="lan" dstip=XXXX dstport=8088 dstintf="XXXX" dstintfrole="undefined" srcinetsvc="Malicious-Malicious.Server" sessionid=2932531463 proto=6 action="deny" policyid=4 policytype="policy" poluuid="XXXXX" service="tcp/8088" dstcountry="Australia" srccountry="Netherlands" trandisp="noop" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 appcat="unscanned" crscore=30 craction=131072 crlevel="high" mastersrcmac="XXXXX" srcmac="XXXXX" srcserver=0

When I use btool it looks like the correct props are being applied:

D:\Program Files\Splunk\bin>splunk btool props list | findstr FWCL01
[host::FWCL01]
TRANSFORMS-set_null = FWCL01_ruleid0_to_null, FWCL01_ruleid4_to_null

Any ideas?
Hi, I tried to install the Splunk UF on Windows Server 2008, but an error appeared. You can see the error in the screenshot. Please advise. Thanks.
Hello, I have a dashboard with a multi-select dropdown that contains a list of all database names. When the dashboard first runs, the token that would hold the database name (if a selection were made in the dropdown) is set to * so all database events are read, and only the top 5 are returned. My query looks like this:

index=whatever shard IN ("*") | chart count as result by shard | sort -result | head 5

So say the display panel shows results for these databases: 229, 290, 112, 273, 242. I want to set the dropdown labelled Shards (form token "form.shardToken") to the list of databases returned, as well as update the token shardToken with the same list. Hopefully that all makes sense.
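A hedged Simple XML sketch of one common pattern, using a done handler on a base search to push the result into the form token (the search id shard_base, the stats values() roll-up into a field called shard_list, and the comma join are all assumptions, and multiselect inputs can be picky about how multiple values are encoded):

<search id="shard_base">
  <query>index=whatever shard IN ("*") | chart count as result by shard | sort -result | head 5 | stats values(shard) as shard_list | eval shard_list=mvjoin(shard_list, ",")</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <set token="form.shardToken">$result.shard_list$</set>
    <set token="shardToken">$result.shard_list$</set>
  </done>
</search>

Setting form.shardToken updates what the input displays, while shardToken is the value other searches consume; whether a comma-separated string selects multiple entries in a multiselect depends on the input's configuration, so treat this as a starting point rather than a guaranteed fix.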
Dear All, I have a requirement to parse the data correctly. I am getting merged events and want separate events for the records below. Could someone help me with what configuration needs to be changed, and how I can learn the regex for it? I need events to break at [22/05/11@08:13:58.246+0200] P-20316642 T-000001...; the timestamp, P, and T values can be different. Appreciate your help.

[22/05/11@08:14:25.252+0200] P-37945744 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
[22/05/11@08:14:03.266+0200] P-29491506 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
[22/05/11@08:13:58.246+0200] P-20316642 T-000001 1 AS -- (Procedure: 'olb-stp-monitoring.r' Line:273) DML TRACE ERROR : use of refreshUsrRig , decomissioning ongoing
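A hedged props.conf sketch for this kind of event boundary, assuming a sourcetype name of your_sourcetype (a placeholder) and the timestamp layout shown in the sample (two-digit year, @ separator, milliseconds, timezone offset):

[your_sourcetype]
SHOULD_LINEMERGE = false
# Break before any line that starts with a [yy/mm/dd@HH:MM:SS...] timestamp
LINE_BREAKER = ([\r\n]+)(?=\[\d{2}/\d{2}/\d{2}@\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^\[
TIME_FORMAT = %y/%m/%d@%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30

LINE_BREAKER needs a capture group for the newline characters, and the lookahead keeps the timestamp itself at the start of each event; this needs to be applied on the first full (parsing) Splunk instance the data passes through.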
While searching with a day span, the panel works fine across multiple dates but creates an issue when searching within a single day: it adds extra time buckets within the day, such as 2:00 AM, 6:00 AM, etc. Below is the code snippet for the row. Is there a solution for this?

<row>
  <panel>
    <title>API Count by Environment - Success</title>
    <chart>
      <search>
        <query>index="cust-*-wfd-api-gtw-ilb" "/v1/platform/change_indicators" (host="*$env$*") | search sourcetype="nginx:plus:access" | where like(status, "%2%%") |eval env = mvindex(split(host, "-"), 1) | timechart span=$timespan$ count(request) as TotalCount by env</query>
        <earliest>$timepicker.earliest$</earliest>
        <latest>$timepicker.latest$</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
      <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
      <option name="charting.axisTitleX.text">Time</option>
      <option name="charting.axisTitleX.visibility">visible</option>
      <option name="charting.axisTitleY.text">Request Count</option>
      <option name="charting.axisTitleY.visibility">visible</option>
      <option name="charting.axisTitleY2.visibility">visible</option>
      <option name="charting.axisX.abbreviation">none</option>
      <option name="charting.axisX.scale">linear</option>
      <option name="charting.axisY.abbreviation">auto</option>
      <option name="charting.axisY.scale">linear</option>
      <option name="charting.axisY2.abbreviation">none</option>
      <option name="charting.axisY2.enabled">0</option>
      <option name="charting.axisY2.scale">inherit</option>
      <option name="charting.chart">area</option>
      <option name="charting.chart.bubbleMaximumSize">50</option>
      <option name="charting.chart.bubbleMinimumSize">10</option>
      <option name="charting.chart.bubbleSizeBy">area</option>
      <option name="charting.chart.nullValueMode">gaps</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
      <option name="charting.chart.stackMode">stacked</option>
      <option name="charting.chart.style">shiny</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.layout.splitSeries">0</option>
      <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
      <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
      <option name="charting.legend.mode">standard</option>
      <option name="charting.legend.placement">right</option>
      <option name="charting.lineWidth">2</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">0</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">medium</option>
    </chart>
  </panel>
</row>
Hello, from the dropdown list below, I need to update the search events with an eval case command:

<input type="dropdown" token="debit" searchWhenChanged="true">
  <label>Débit</label>
  <choice value="2 Mb/s">2 Mb/s</choice>
  <choice value="4 Mb/s">4 Mb/s</choice>
</input>

So I tried something like this, but it doesn't work:

| eval debit="$debit$"
| eval deb=case(debit=="2 Mb/s", site=="TOTO" OR site=="TITI", debit=="4 Mb/s", site=="TUTU" OR site=="TATA", 1==1,site)
| table site deb

Could you help please?
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*zvkk*" AND message="*2022-05-09*"
|fields message
|rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
|rex field=_raw "sourceSystem=(?<Source>[^,]*)"
|rex field=_raw "entityType=(?<Entity>\w+)"
|rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"
|sort Time desc
|dedup Entity
|table Source, BusDate, Entity, Time

In the query above, I would like to set the message="*2022-05-09*" date filter automatically; basically I need to set up an alert that searches for yesterday's date.
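A hedged sketch of one way to build yesterday's date at search time, using a subsearch that emits the search term (returning a field literally named search is what makes the subsearch output expand into the outer query):

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*zvkk*"
    [| makeresults
     | eval yesterday=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
     | eval search="message=\"*" . yesterday . "*\""
     | fields search]
| ...

relative_time(now(), "-1d@d") snaps to the start of yesterday and strftime() renders it as YYYY-MM-DD, so the same saved search works as a daily alert without editing the date by hand; the rest of the original pipeline (rex, sort, dedup, table) follows after the subsearch.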
Hi there, I am new to Splunk and I am playing with some live data. My problem is that my 500MB daily indexing limit keeps being exceeded. Because of that, sometimes I am not able to run searches, and each time I need to reinstall, which limits what I can do and is time consuming. I want to increase that daily limit so that I can work on live test data. Thanks in advance.
Is it possible to map one index to another index?
Splunk newbie here! My use case is to:
1. Monitor AWS EC2 web server metrics. How do I push CPU, iostat, and other stats to Splunk? I tried to install an app/add-on, but the dashboards are empty. I need some help building the graphs and populating the metrics.
2. Integrate Splunk with Grafana. I was able to successfully connect Splunk as a data source, but I'm not sure how to build the dashboards in Grafana for Splunk data.
Any advice/recommendations to accomplish this are appreciated.
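For the first point, a hedged sketch of the kind of inputs.conf that is typically enabled on the EC2 host when using the Splunk Add-on for Unix and Linux with a Universal Forwarder (the script names and sourcetypes come from that add-on and should be checked against the installed version; the index name os_metrics is a placeholder):

[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
index = os_metrics
disabled = 0

[script://./bin/iostat.sh]
interval = 60
sourcetype = iostat
index = os_metrics
disabled = 0

If an add-on's dashboards stay empty, it is usually because these scripted inputs are still disabled or the data is landing in an index the dashboards do not search.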
Hello, I completed a few UF-based data ingestions and Splunk is getting events from them, but I have some issues with event breaking. I have 2 types of files: 1) text files with a header and pipe delimiters, and 2) XML files.

For the text files, the header info is showing up within the Splunk events, and events are not breaking as expected: in most cases one Splunk event contains more than one source event. For the XML files, everything within one source file is treated as a single Splunk event, but it should become a number of events based on the XML tag. Any thoughts/recommendations to resolve these issues would be highly appreciated. Thank you! The props/inputs configuration files and source samples are given below.

For text files:

props.conf
[ds:audit]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
HEADERFIELD_LINE_NUMBER=1
INDEXED_EXTRACTIONS=psv
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%Q%z
TIMESTAMP_FIELDS=TimeStamp

inputs.conf
[monitor:///opt/audit/DS/DS_EVENTS*.txt]
sourcetype=ds:audit
index=ds_test

Sample:
serID|UserType|System|EventType|EventId|Subject|SessionID|SrcAddr|EventStatus|ErrorMsg|TimeStamp|Additional Application Data |Device
p22bb4r|TEST|DS|USER| VIEW_NODE |ELEMENT<843006481>|131e9d5b-e84e-567d-a6b1-775f58993f68|null|00||2022-06-14T09:01:55.001+0000||NA
p22bbs1|TEST|DS|USER| FULL_SEARCH |ELEMENT<843006481>|121e7d5b-f84e-467d-a6b1-775f58993f68|null|00||2021-06-14T09:01:50.001+0000||NA
p22bbw3|TEST|DS|USER| FULL_SEARCH | ELEMENT< 343982854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:54:08.054+0000||NA
ts70sbr4|TEST|DS|USER|VIEW_NODE| ELEMENT< 35382854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:54:16.054+0000||NA
ts70sbd3|TEST|DS|USER|FULL_SEARCH|ELEMENT<933982854>|5b8fb22e-eeed-4802-8b07-8559dbfe1e45|null|00||2021-06-14T08:53:54.053+0000||NA

For XML files:

props.conf
[secops:audit]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]*)<MODTRANSL>
TIME_PREFIX=<TIMESTAMP>
TIME_FORMAT=%Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD=14
TRUNCATE=2500

inputs.conf
[monitor:///opt/app/secops/logs/audit_secops_log*.XML]
sourcetype=secops:audit
index=secops_test

Sample data:
<?xml version="x.1" encoding="UTF-8"?><DSDATA><MODTRANSL><TIMESTAMP>20190621121321</TIMESTAMP><USERID>d23bsrb</USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DS</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>SYS</EVENTID><ID>0300001</ID><SRCADDR>10.210.135.108</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> Initiated New Entity Status: AP</VARDATA></MODTRANSL><MODTRANSL><TIMESTAMP>20190621121416</TIMESTAMP><USERID> d23bsrb </USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DSI</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>SYS</EVENTID><ID>000000000</ID><SRCADDR>10.210.135.120</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> Entity Status: Approved New Entity Status: TI</VARDATA></MODTRANSL><MODTRANSL><TIMESTAMP>20190621121809</TIMESTAMP><USERID>sj45yrs</USERID><USERTYPE>SECOPS</USERTYPE><SYSTEM>DSI</SYSTEM><EVENTTYPE>ADMIN</EVENTTYPE><EVENTID>DS_OPD</EVENTID><ID>2192345</ID><SRCADDR>10.212.25.19</SRCADDR><RETURNCODE>00</RETURNCODE><VARDATA> 43ded7433b314eb58d2307e9bc536bd3</VARDATA > <DURATION>124</DURATION> </MODTRANSL</DSDATA>
Hi All, I have a question: how do I create an index using the REST API in an indexer-clustered environment? Version: Splunk Enterprise 8.x
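For reference, a hedged sketch of the non-clustered way to create an index over REST, which only creates the index on the instance you call (the host, credentials, and index name are placeholders):

curl -k -u admin:changeme https://splunk-host:8089/services/data/indexes -d name=my_new_index

In an indexer cluster, indexes are normally defined in an indexes.conf pushed from the cluster manager's configuration bundle rather than created per-peer over REST, so calling this endpoint on each indexer is generally not the supported approach; check the cluster manager bundle workflow for your 8.x version.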