All Posts


Hi @Skv, which script? Caching is a normal feature of Splunk Forwarders. Ciao. Giuseppe
Hello @vvkarur, you can try this regex:

| rex field=_raw "\"role\":\"(?<field_name>\w+)\""

Thanks!
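For illustration, here is a minimal, self-contained sketch of that rex against a made-up event (the event text below is invented, not from the original question):

| makeresults
| eval _raw = "{\"name\":\"Alex\", \"role\":\"student\", \"age\":\"16\"}"
``` hypothetical sample event ```
| rex field=_raw "\"role\":\"(?<field_name>\w+)\""
| table field_name

This should return field_name=student.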
Hi guys! I've been struggling for a while to understand metrics. When making a line chart for both the average and the max value, the trends are exactly the same. This is the query:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND collection=CPU AND host="host" span=1m
| fields _time, Avg, Max

But if I compute avg and max of the value over the same time range (without span), I get two different values. Query used:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND "collection"="CPU" AND "host"="host"

Earlier I had this data ingested as events, and then avg and max had different trends. The inputs.conf file looks like this (using the Splunk_TA_windows app):

## CPU
[perfmon://CPU]
counters = % Processor Time
disabled = 0
samplingInterval = 2000
stats = average; min; max
instances = _Total
interval = 60
mode = single
object = Processor
useEnglishOnly = true
formatString = %.2f
index = metric_index

Is someone able to explain why this happens? Thanks in advance
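A quick synthetic sketch of why this can happen: with interval = 60 the metric is collected once per minute, so each span=1m bucket holds a single sample, and the per-bucket avg and max are identical, while avg and max over the whole range differ. This emulation uses random event data, not mstats:

| makeresults count=60
| streamstats count as i
| eval _time = now() - i*60, value = random() % 100
| timechart span=1m avg(value) as Avg, max(value) as Max

With one point per bucket, the Avg and Max lines overlap exactly.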
Could you please share the script how it can used @gcusello 
Unfortunately, no.
Hi @jngo, I got exactly the same problem. Have you found a solution to this situation? Thanks, Olivier
We are using an http URL with the setting enableSplunkWebSSL = false in web.conf. The host from which I am trying to access Splunk Web is a Windows machine, and the telnet I did was from the Splunk server itself, a Linux machine, which I am trying to access; the URL is not reachable. Below is the output from the Splunk server:

sudo iptables -L
[sudo] password for acnops_splunk:
Chain INPUT (policy ACCEPT)
target     prot opt source    destination
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:irdmi
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:palace-6
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:distinct32
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:8089
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:distinct

Chain FORWARD (policy ACCEPT)
target     prot opt source    destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source    destination

[acnops_splunk@IEM******** ~]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[acnops_splunk@IEM****** ~]$

Looking forward to a solution.
Hi @Bluekeeper, sorry, but I don't understand your requirement: why do you want to do this? About your question: REST is used only for searching. About credentials, you could try to store them using Splunk's built-in encryption, but I don't understand what you want to do. I suppose you would like to modify some conf file in the deployment-apps folder of the Deployment Server; in that case, the only solution is a script outside the Splunk web GUI. Ciao. Giuseppe
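As an illustration of the Splunk credential encryption mentioned above: secrets stored in Splunk's secure storage can be read back with the rest search command, roughly like this sketch (the app name "search" is a placeholder, and seeing clear_password requires a suitably privileged role):

| rest splunk_server=local /servicesNS/nobody/search/storage/passwords
| table title realm username clear_password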
What is it you are trying to achieve and why can you not do it using simple drilldowns?
Hello, I tried to configure the SplunkForwarder but I got this error message: "Unable to initialize modular input 'upload_pcap' defined in the app 'splunk_app_stream'. Introspecting scheme=upload_pcap: script running failed (exited with code 1)"
There are many ways to do this, but using the if function is perhaps my last choice. Try this:

| rex field=index "_(?<app_id>\w+?)_(?<environment>(non_)*prod)"

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="index
sony_app_XXXXXX_non_prod
sony_app_XXXXXX_prod
sony_app_123456_non_prod
sony_app_xyzabc_prod"
``` the above emulates index = sony_* ```

Output from this emulation is:

app_id      environment  index
app_XXXXXX  non_prod     sony_app_XXXXXX_non_prod
app_XXXXXX  prod         sony_app_XXXXXX_prod
app_123456  non_prod     sony_app_123456_non_prod
app_xyzabc  prod         sony_app_xyzabc_prod

Hope this helps.
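For comparison, a sketch of the if/eval alternative the answer sets aside; the field names match the rex above, but this is illustrative only:

| eval environment = if(match(index, "_non_prod$"), "non_prod", "prod")
| eval app_id = replace(index, "^sony_(.+?)_(?:non_)?prod$", "\1")

The single rex is clearly more compact, extracting both fields in one pass.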
Hi, I want to move a file from a client to the Deployment Server via the Search Head. I was thinking of something like:

| makeresults
| eval content="the content of the text file that needs to be sent over to the DS."
| search [ | rest splunk_server=ds /services/search/jobs search="| outputlookup test.csv" ]

but it seems that the rest command does not support anything except search (it does not work with pipes after search either); it is not the same as using REST from the CLI or REST queries from outside Splunk. Since it would be a challenge to store credentials protected inside an app, doing it with a script or the CLI would be my last option. Doing it through the web interface would be better for further development. Thanks
First, when posting type 2, which is in JSON, please use raw text. Splunk's "syntax highlights" view is non-compliant and very difficult to process. (See the crazy rex in my emulation below; you also introduced additional syntax errors when attempting to simplify or anonymize.) Also in type 2, you should preserve the uuid's value, as that's the only key that distinguishes between the two. For everyone's benefit, I'm posting reconstructed raw events from type 2:

{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "9c5b94b1-35ad-49bb-b118-8e8fc24abf80"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 2 }
      ],
      "id": "Cartman",
      "name": "Cartman"
    }
  },
  "@timestamp": "2025-01-01T00:00:01.833685"
}
{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "550e8400-e29b-41d4-a716-446655440000"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 1 }
      ],
      "id": "Cartman",
      "name": "Cartman"
    }
  },
  "@timestamp": "2025-01-02T00:00:01.833685"
}

Like @bowesmana, I fail to see the relevance of type 1. Type 2 is all you need to produce the results you want. I also don't see why you want to print two tables rather than one table with two rows (differentiated by UUID). So, this is what I'm going to show. The actual code is pretty simple; most of my time was sunk into reconstructing valid JSON data from your pasted text.

| fields @message.attributeMapping.attributeContractFulfillment.uuid.value ``` ^^^ this line is just to declutter output ```
| spath path=@message.configuration.fields{}
| eval restructured_fields = json_object()
| foreach @message.configuration.fields{} mode=multivalue
    [eval restructured_fields = json_set(restructured_fields, json_extract(<<ITEM>>, "name"), json_extract(<<ITEM>>, "value"))]
| spath input=restructured_fields

(The foreach syntax above requires Splunk 9.0.) Output from the two reconstructed events is as follows (the first column is @message.attributeMapping.attributeContractFulfillment.uuid.value, abbreviated here):

uuid.value                            Application Entitlement Attribute  Application ObjectClass  IAL to Enforce
9c5b94b1-35ad-49bb-b118-8e8fc24abf80  cartmanRole                        cartmanUser              2
550e8400-e29b-41d4-a716-446655440000  cartmanRole                        cartmanUser              1

Does this satisfy your requirements?
It is useful to print out the two intermediate JSON objects used in this search so you can clearly see the dataflow:

@message.configuration.fields{}                                           restructured_fields
{ "name": "Application ObjectClass", "value": "cartmanUser" }             {"Application ObjectClass":"cartmanUser","Application Entitlement Attribute":"cartmanRole","IAL to Enforce":2}
{ "name": "Application Entitlement Attribute", "value": "cartmanRole" }
{ "name": "IAL to Enforce", "value": 2 }

{ "name": "Application ObjectClass", "value": "cartmanUser" }             {"Application ObjectClass":"cartmanUser","Application Entitlement Attribute":"cartmanRole","IAL to Enforce":1}
{ "name": "Application Entitlement Attribute", "value": "cartmanRole" }
{ "name": "IAL to Enforce", "value": 1 }

@message.configuration.fields{}, of course, is extracted directly from raw data. Here is an emulation for you to play with and compare with real data type 2:

| makeresults
| fields - _time
| eval sourcetype = "type2", data = mvappend("{ [-]
@message: { [-]
attributeContract: { [-]
extendedAttributes: [ [-]
]
maskOgnlValues: false
uniqueUserKeyAttribute: uuid
}
attributeMapping: { [-]
attributeContractFulfillment: { [-]
uuid: { [-]
source: { [-]
type: ADAPTER
}
value: 9c5b94b1-35ad-49bb-b118-8e8fc24abf80
}
}
attributeSources: [ [-]
]
issuanceCriteria: { [-]
conditionalCriteria: [ [-]
]
}
}
configuration: { [-]
fields: [ [-]
{ [-]
name: Application ObjectClass
value: cartmanUser
}
{ [-]
name: Application Entitlement Attribute
value: cartmanRole
}
{ [-]
name: IAL to Enforce
value: 2
}
]
id: Cartman
name: Cartman
}
}
@timestamp: 2025-01-01T00:00:01.833685
}", "{ [-]
@message: { [-]
attributeContract: { [-]
extendedAttributes: [ [-]
]
maskOgnlValues: false
uniqueUserKeyAttribute: uuid
}
attributeMapping: { [-]
attributeContractFulfillment: { [-]
uuid: { [-]
source: { [-]
type: ADAPTER
}
value: 550e8400-e29b-41d4-a716-446655440000
}
}
attributeSources: [ [-]
]
issuanceCriteria: { [-]
conditionalCriteria: [ [-]
]
}
}
configuration: { [-]
fields: [ [-]
{ [-]
name: Application ObjectClass
value: cartmanUser
}
{ [-]
name: Application Entitlement Attribute
value: cartmanRole
}
{ [-]
name: IAL to Enforce
value: 1
}
]
id: Cartman
name: Cartman
}
}
@timestamp: 2025-01-02T00:00:01.833685
}")
| rex field=data mode=sed "s/\[-]//g s/\n+([\w@])/\n\"\1/g s/([^\"]): (true|false|\d+\n)/\1\": \2/g s/([^\"]):(\W+\n)/\1\":\2/g s/([^\"]): (.+)/\1\": \"\2\"/g s/([\w\"}\]])\n([\"{\[])/\1,\n\2/g"
| mvexpand data
| rename data AS _raw
| spath
``` data type 2 emulation above ```

(Can you see how crazy that rex command is?)

For completeness, this is how you extract data from type 1, in case it is of use to you:

| eval message = replace(message, "'", "")
| spath input=message

The message field should have been present at search time. The result from your sample data is (the first five columns are UserAccessSubmission.csp, .mail, .objectClass, .trackingId, and .uuid, abbreviated here):

csp      mail                objectClass  trackingId    uuid    sourcetype  trackingid
Butters  sean@southpark.net  cartmanUser  tid:13256464  abc123  type1       tid:13256464
Butters  sean@southpark.net  cartmanUser  tid:13256464  abc123  type1       tid:13256464
Butters  sean@southpark.net  cartmanUser  tid:13256464  abc123  type1       tid:13256464
Butters  sean@southpark.net  StanUser     tid:13256464  abc123  type1       tid:13256464
Butters  sean@southpark.net  StanUser     tid:13256464  abc123  type1       tid:13256464

This is the emulation of data type 1 used to extract the above.
| makeresults
| fields - _time
| eval sourcetype = "type1", data = split("2025-01-01 00:00:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}'
2025-01-01 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}'
2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}'
2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"StanUser\",\"csp\":\"Butters\"}}'
2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"StanUser\",\"csp\":\"Butters\"}}'", "
")
| mvexpand data
| rename data AS _raw
| extract
``` data type 1 emulation above ```
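As a standalone illustration of the json_object/json_set/foreach pattern used in the answer above (the field names and values here are made up):

| makeresults
| eval fields = mvappend("{\"name\":\"a\",\"value\":\"1\"}", "{\"name\":\"b\",\"value\":\"2\"}")
| eval restructured = json_object()
| foreach fields mode=multivalue
    [eval restructured = json_set(restructured, json_extract(<<ITEM>>, "name"), json_extract(<<ITEM>>, "value"))]
| spath input=restructured

As noted above, this requires Splunk 9.0 or later.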
Hi @dmoberg, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I was able to solve it. By populating the dropdowns in the dashboard from an inputlookup (with data from a scheduled search), the approach I detailed below (setting the FirstLoad token) started working. I'm not sure exactly why this made it work, but at least it works.
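For reference, the populating search for such a lookup-driven dropdown might look like this sketch (the lookup file name and its fields are assumptions, not from the original post):

| inputlookup dropdown_values.csv
| dedup value
| table value label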
You also need to give permission to the splunk user to write to the volumes you are writing to on the indexer, i.e. if you have a /hot and /cold volume on the indexer, the splunk user needs to have ownership and permissions to write to these volumes.
Hi @Nrosemommy, could you better describe your infrastructure?
Do you have a Forwarder?
Did you configure it to send logs to Splunk?
Did you configure Splunk to receive logs?
Did you use an add-on?
Which logs do you want to send?
How did you check the situation you're reporting?
Ciao. Giuseppe
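One common way to check whether a forwarder is sending anything at all is to search Splunk's internal index for that host; a sketch, with the host name as a placeholder:

index=_internal sourcetype=splunkd host="my_forwarder"
| stats count by component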
@usukhbayar_g Check Point firewalls do generate logs, but they primarily focus on security events and traffic data rather than detailed system information such as CPU, memory usage, or hardware sensors. You should check whether your firewall is logging these system details. Please see this document on how to enable those logs for monitoring: https://community.checkpoint.com/t5/Security-Gateways/Monitoring-RAM-and-CPU-usage/td-p/144877
As @ITWhisperer points out, neither substring nor regex is the correct tool to extract information from structured data such as JSON. I assume that the so-called "string" is not the entire event, because otherwise Splunk would have automatically extracted role at search time. Suppose you have an event like this (I'm quite convinced that you missed a comma between '00"' and '"name'.):

stuff before ... {"code":"1234","bday":"15-02-06T07:02:01.731+00:00", "name":"Alex", "role":"student","age":"16"} stuff after ...

What you do is use regex to extract the part that is compliant JSON (not a portion of it), then use spath or fromjson to extract all key-value pairs. The following should usually work:

| rex "^[^{]*(?<json_message>{.+})"
| spath input=json_message

That sample data will return:

age  bday                          code  name  role     json_message
16   15-02-06T07:02:01.731+00:00   1234  Alex  student  {"code":"1234","bday":"15-02-06T07:02:01.731+00:00", "name":"Alex", "role":"student","age":"16"}

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw = "stuff before ... {\"code\":\"1234\",\"bday\":\"15-02-06T07:02:01.731+00:00\", \"name\":\"Alex\", \"role\":\"student\",\"age\":\"16\"} stuff after ..."
``` data emulation above ```
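On Splunk 9.0 and later, the fromjson command mentioned above can replace the spath step; a minimal sketch using the same extracted field:

| rex "^[^{]*(?<json_message>{.+})"
| fromjson json_message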
Hi @SN1, the telnet command needs to run on the forwarder, as mentioned by @gcusello, and I also hope you followed the steps mentioned by @livehybrid.