All Posts

Hi @LizAndy123, is there a common field to use for grouping, e.g. host or transaction_id? If yes, use it in this way:

<your_search>
| stats values(totalCapacity) AS totalCapacity values(usedCapacity) AS usedCapacity BY common_key
| eval CapLeft = totalCapacity - usedCapacity

If the same common_key can have more than one value, use max, min, or avg instead of values as the aggregation function.

Ciao.
giuseppe
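In case it helps, here is a self-contained sketch of the same pattern that can be pasted straight into a search bar. The two makeresults events reuse the sample numbers from the question; the common_key value is invented purely for illustration:

| makeresults
| eval common_key="storage01", totalCapacity=12323455667
| append
    [| makeresults
    | eval common_key="storage01", usedCapacity=233445]
| stats values(totalCapacity) AS totalCapacity values(usedCapacity) AS usedCapacity BY common_key
| eval CapLeft = totalCapacity - usedCapacity

The stats command collapses the two events into one row per common_key, after which the eval subtraction works exactly as it did when everything arrived in a single event.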
So I have in the past used a report which finds a string and then calculates the size left, and it came as 1 whole event, so it was simple. Now it is coming as 2 events - how do I perform this on the 2 events?

1st event - replies with totalCapacity=12323455667
2nd event - replies with usedCapacity=233445

I need to take the used away from the total and report it. This was possible before because it came as just 1 event: I did an eval CapLeft = totalCapacity - usedCapacity and it worked because everything was in the same event - 1 event contained totalCapacity and usedCapacity in the same output.
You most likely didn't install the full glibc package or the needed libraries with those 2 commands. We used it for something else (not AppDynamics) on Alpine before and it worked great: https://github.com/sgerrand/alpine-pkg-glibc

I will try the solution shortly from my side as well, but it should resolve the issue.
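For reference, this is roughly the install pattern from that repo's README. The glibc version number below is only an example - check the repo's releases page for a current one:

apk --no-cache add ca-certificates wget
# Trust the package signing key, then fetch and install a glibc release APK
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.35-r1/glibc-2.35-r1.apk
apk add glibc-2.35-r1.apk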
@livehybrid, both suggestions look great, but I am trying to have a field in a table dynamically change based on the result. So one field across multiple columns will have different colors based on the results in that field. The idea is to call out in red the results that are >= +/-15.
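If this is a classic SimpleXML dashboard, one way to get that effect is a color format expression on the table column. Here delta is just a placeholder for your field name, the hex colours are examples, and the <= has to be written as &lt;= because the expression lives inside XML:

<format type="color" field="delta">
  <!-- red when the value is at or beyond +/-15, green otherwise -->
  <colorPalette type="expression">if(value >= 15, "#DC4E41", if(value &lt;= -15, "#DC4E41", "#53A051"))</colorPalette>
</format>

The same <format> block can be repeated for each column that should be checked against the threshold.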
Hi @livehybrid  So two days later I see this: it says there's data being recorded in the KPI, but simultaneously there is no data.
Hi @livehybrid, I appreciate your comment! That's not exactly what I'm looking for. I don't want to use the Table panel, but instead I want to change the Event panel's display to table. I apologize if what I'm saying is unclear. The best example I can give is the same thing I'm aiming for in the Classic Dashboard:

<panel>
  <event>
    <search>
      <query>index=*</query>
      <earliest>$global_time.earliest$</earliest>
      <latest>$global_time.latest$</latest>
    </search>
    <fields>field, field2, field3</fields>
    <option name="type">table</option>
  </event>
</panel>
The short answer is - no. The long answer is - at each stage of a bucket's lifecycle (hot/warm/cold) it's limited by a different set of parameters. See https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf

Additional size constraints can be added on a per-volume level.
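To make the per-stage parameters concrete, here is an illustrative indexes.conf sketch. The index name, sizes, and paths are made up; the point is that hot/warm/cold are each bounded by different knobs, while frozenTimePeriodInSecs is the only whole-index retention time:

[my_index]
homePath = volume:hot/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# total retention: ~3 years, after which buckets freeze (delete/archive)
frozenTimePeriodInSecs = 94608000
# hot: rolls to warm by age (here roughly daily) or by size
maxHotSpanSecs = 86400
# warm: rolls to cold once this many warm buckets exist
maxWarmDBCount = 300

# cold: effectively bounded by the volume it lives on
[volume:cold]
path = /data/cold
maxVolumeDataSizeMB = 500000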
Hello, I would like to know if it is possible to define the retention period for each type of log (Hot/Warm/Cold). For example, setting the total frozenTimePeriodInSecs to 3 years while specifying a 1 year retention period for each stage (Hot, Warm and Cold). Could you please clarify this?
You can use regex to filter events:

| regex hostname="AB(100|110|130)\d{4}TILL\d+$(?<!(100|101|102|150|151))"
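A quick way to sanity-check the pattern against sample hostnames before pointing it at real data - on recent Splunk versions (8.2+) makeresults can generate events from inline CSV; the hostnames below are invented:

| makeresults format=csv data="hostname
AB1001234TILL1
AB1101234TILL150
AB1301234TILL7
AB1501234TILL2"
| regex hostname="AB(100|110|130)\d{4}TILL\d+$(?<!(100|101|102|150|151))"

The first and third hostnames should survive: TILL150 is dropped by the negative lookbehind on the till number, and the 150 country code never matches the first group.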
Thank you @livehybrid  I added the time field as per your recommendation and it worked!!

"time": event["timestamp"],

On my search, _time and timestamp have the same value, even the milliseconds. Thanks again.
Thanks for the reply @alec_stan 

The endpoint you are using is an alias to the event endpoint. If you're sending data to that endpoint then you need to make sure your data is in the following format:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!"
}

If you have the epoch timestamp value in the time field there, then it should work without any extra parsing required. If you're writing the script yourself, it might be easier to write it in the format required for HEC with the timestamp, rather than updating the Splunk props.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
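For anyone testing this by hand, a curl sketch against the event endpoint - the host name, token, and port are placeholders (8088 is only the HEC default):

curl -k "https://your-splunk-host:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"time": 1426279439, "host": "localhost", "sourcetype": "my_sample_data", "index": "main", "event": "Hello world!"}'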
If a DHCP is assigning the IP address to your Splunk server, do not go for this.

You will also need to add the following line to /opt/splunk/etc/system/local/web.conf:

[settings]
mgmtHostPort = 10.10.x.x:8089

_______________________________________________________________

Otherwise, the Splunk default stanza (below) will make Splunk Web look for splunkd at its loopback address.

/opt/splunk/etc/system/default/web.conf
mgmtHostPort = 127.0.0.1:8089
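To confirm which value actually wins after the edit, btool can show the effective setting and the file it comes from (the path assumes a default /opt/splunk install):

/opt/splunk/bin/splunk btool web list settings --debug | grep mgmtHostPort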
Thanks for the prompt response. Yes, the props.conf is on the intermediate forwarder. Below is how we send the data to the forwarder:

response = requests.post(f"{splunk_url}/services/collector",
                         headers=headers,
                         data=json.dumps(data),
                         timeout=10)

We did not specify either /raw or /event. I will proceed to edit the script and send explicitly to the raw endpoint.
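A minimal sketch of what that change might look like, assuming token auth in the headers. The URL, token, and payload are placeholders, and the sourcetype query parameter is illustrative - the raw endpoint accepts metadata such as sourcetype, index, and host as query-string parameters:

import json
import requests

splunk_url = "https://your-forwarder:8088"              # placeholder
headers = {"Authorization": "Splunk <your-hec-token>"}  # placeholder
data = {"hostname": "host01", "timestamp": 1741857668.0344827}

# /services/collector/raw sends the payload through normal parsing,
# so the props.conf timestamp extraction on the forwarder applies
response = requests.post(
    f"{splunk_url}/services/collector/raw?sourcetype=infra:script:uptime",
    headers=headers,
    data=json.dumps(data),
    timeout=10,
)
response.raise_for_status()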
Hi @okumar1 

I see you have put a screenshot of the rule that allows inbound traffic to your server, but is your UF server also configured with outbound connectivity on port 9997? If you are using Linux with netcat installed then this might work well to test:

nc -vz -w1 yourServerIP 9997

You can also check with portquiz.net:

nc -vz -w1 portquiz.net 9997

The portquiz.net test relies on you being able to reach the internet from your server.

Let us know how you get on and we can investigate further, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Further to my previous post, typically it is said that you cannot change the _time value of HEC data sent to the event endpoint (see https://community.splunk.com/t5/Getting-Data-In/JSON-timestamps-not-parsed-via-HTTP-Event-Collector/td-p/204689 for example).

However, in >8.0 you can add ?auto_extract_timestamp=true to your event endpoint (if the source system allows...) and then the data will go through the timestamp parsing process.

Additionally, I think it should be possible to use INGEST_EVAL to overwrite your _time if the above doesn't work, as INGEST_EVAL is still run against parsed HEC data:

==props.conf==
[yourSourceType]
TRANSFORMS-determineTime = correctedHECTime

==transforms.conf==
[correctedHECTime]
INGEST_EVAL = _time:=strptime(timestamp, "%s.%6N")

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @okumar1,

at first, did you check if the connection (firewall route) is open between the UF and the IDX on port 9997? You can check this on the UF using telnet.

Then, is the 9997 port open on the IDX local firewall?

Ciao.
Giuseppe
Hi @Chakri 

I think the example I gave you should be able to specify any particular country code / store number in the first part (hostname=AB123*). The second part then removes the TILLs from that store which you are not interested in.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @livehybrid 

I forgot to mention one more detail: we have 3 country codes, like 100, 110, 130, and the hostnames will be like this:

AB1001234TILL1
AB1101234TILL1
AB1301234TILL1

So I have to differentiate based on store, country, and till.
Hi @alec_stan 

Please could you confirm, are these props on your intermediary forwarder? From the naming of the host I am assuming so, but wanted to check. More importantly, can you confirm which HEC endpoint the servers are sending to, i.e. is it raw or event?

Raw endpoint: services/collector/raw
Event endpoint: services/collector/event

If you are sending to the event endpoint then your props to extract the time will have no impact (for more info see https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTREF/RESTinput#services.2Fcollector.2Fraw:~:text=true%2C%221%22%3Atrue%7D%7D-,services/collector/event,-Sends%20timestamped%20events).

Also check out https://community.splunk.com/t5/Getting-Data-In/HEC-How-to-set-time-on-base-of-a-specific-JSON-field/m-p/515486 which seems to be on a similar topic - although I will be able to answer more accurately if you could please confirm the above questions.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
We have a discrepancy of 30 to 40 seconds between the event timestamp and _time. I have tried changing the config in props.conf without any luck. Our setup is such that the search head is in the cloud while all the forwarders are on premise. The events are collected using psutil on Linux servers and sent to the IF through the HEC. The props.conf is as follows:

[infra:script:uptime]
SHOULD_LINEMERGE = false
KV_MODE = json
INDEXED_EXTRACTIONS=JSON
TIMESTAMP_FIELDS=timestamp
TIME_PREFIX = "timestamp":\s
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 100
DATETIME_CONFIG = NONE
TRUNCATE = 0
TZ=Africa/Gaborone

btool produces the following output:

[splunkusr@uatbwsif001v bin]$ ./splunk cmd btool props list "infra:script:uptime" --debug
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf [infra:script:uptime]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf DATETIME_CONFIG = NONE
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf KV_MODE = json
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf MAX_TIMESTAMP_LOOKAHEAD = 100
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf SHOULD_LINEMERGE = false
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIMESTAMP_FIELDS = timestamp
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIME_FORMAT = %s.%6N
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIME_PREFIX = "timestamp":\s
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TRUNCATE = 0
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TZ = Africa/Gaborone
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

Below is a sample raw event on Splunk Cloud:

{"hostname": "uatbwmca02v.bw.sbicdirectory.com", "timestamp": 1741857668.0344827, "uptime_days": 183, "uptime_hours": 20, "uptime_minutes": 2, "uptime_total_seconds": 15883370}

I have attached a screenshot of the following search:

index=uat_uptime
| eval correct_time=strptime(timestamp, "%s.%6N")
| convert ctime(correct_time) ctime(timestamp)
| table _time, correct_time, timestamp
| sort -_time

From the results, it is clear that there is a difference of 30-40 seconds between _time and the timestamp field on the event. Another anomaly is that _time is behind the timestamp. I need help forcing _time to be set to the value of timestamp on the event.
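A minimal sketch of the payload change the replies above point to - putting the epoch value into the HEC event envelope's time field so it becomes _time directly, with no props parsing needed. The URL, token, and event contents are placeholders:

import json
import requests

splunk_url = "https://your-forwarder:8088"              # placeholder
headers = {"Authorization": "Splunk <your-hec-token>"}  # placeholder

event = {"hostname": "uatbwmca02v", "timestamp": 1741857668.0344827}

payload = {
    "time": event["timestamp"],   # epoch seconds; HEC uses this as _time
    "sourcetype": "infra:script:uptime",
    "event": event,
}
requests.post(f"{splunk_url}/services/collector/event",
              headers=headers, data=json.dumps(payload), timeout=10)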