All Posts



@solg It looks like there is nothing publicly available. We had to reach out to Proofpoint for the Python script to get TRAP data in, so it sounds like a question for Proofpoint. You can download the app and related TAs here:
App: https://splunkbase.splunk.com/app/3727/#/details
Gateway TA: https://splunkbase.splunk.com/app/3080/
TAP TA: https://splunkbase.splunk.com/app/3681/
TRAP Cloud has an API to export information, but there is no Add-On to integrate TRAP Cloud with Splunk. Has anyone made this integration successfully? Is there any intention to implement a supported Add-On to integrate TRAP Cloud with Splunk?
We successfully configured FortiWeb SaaS -> Splunk SSL syslog via inputs.conf:

[tcp-ssl:6514]
index = <index>
sourcetype = fwbcld_log
disabled = 0

[SSL]
requireClientCert = false
Thanks @bowesmana, it worked for me. Another question: is it possible to fetch only the latest record (the one with the latest END_TIME) when we have multiple records with different END_TIME values? Currently, if there are 2 records with different END_TIME for the same JOBNAME, we get 2 records. Is it possible to display only 1 record per JOBNAME with the latest END_TIME?
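A possible sketch (assuming JOBNAME and END_TIME are the actual field names, and that END_TIME sorts correctly as an epoch value or a sortable timestamp string):

```
| sort 0 - END_TIME
| dedup JOBNAME
```

dedup keeps the first record it sees per JOBNAME, so sorting by END_TIME descending first leaves exactly one record per job, the one with the latest END_TIME.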
Hi @Skv, which script? Caching is a normal feature of Splunk forwarders. Ciao. Giuseppe
Hello @vvkarur ,
You can try this regex:
| rex field=_raw "\"role\":\"(?<field_name>\w+)\""
Thanks!
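A quick emulation to try that rex against (the JSON event below is hypothetical, invented just for illustration, since the original data is not shown):

```
| makeresults
| eval _raw = "{\"user\":\"jdoe\",\"role\":\"admin\"}"
| rex field=_raw "\"role\":\"(?<field_name>\w+)\""
| table field_name
```

If the real events quote role the same way, field_name should come out as admin.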
Hi guys! I've been struggling for a while to understand metrics. When making a line chart of both the average and the max value, the trends are exactly the same. This is the query:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND collection=CPU AND host="host" span=1m
| fields _time, Avg, Max

But if I take the avg and max of the value over the same time range, I get two different values. Query used:

| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max Where index="metric_index" AND "collection"="CPU" AND "host"="host"

Earlier I had this data ingested as events, and then I got different trends for avg and max. The inputs.conf file looks like this (using the Splunk_TA_windows app):

## CPU
[perfmon://CPU]
counters = % Processor Time
disabled = 0
samplingInterval = 2000
stats = average; min; max
instances = _Total
interval = 60
mode = single
object = Processor
useEnglishOnly = true
formatString = %.2f
index = metric_index

Is someone able to explain why this happens? Thanks in advance
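One possible explanation (an assumption based on the interval = 60 setting in the inputs.conf above, not a confirmed diagnosis): the input writes roughly one data point per minute, so with span=1m each chart bucket contains a single measurement, and the average and maximum of a single value are identical. Over the whole time range there are many points, which is why avg and max differ without the span. Widening the span so each bucket holds several measurements should separate the two series, e.g.:

```
| mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max
    Where index="metric_index" AND collection=CPU AND host="host" span=5m
| fields _time, Avg, Max
```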
Could you please share the script and how it can be used, @gcusello?
Unfortunately, no.
Hi @jngo, I have exactly the same problem. Have you found a solution to this situation? Thanks, Olivier
We are using an http URL, with enableSplunkWebSSL = false set in web.conf. The host from which I am trying to access Splunk Web in a browser is a Windows machine; the telnet I did was from the Splunk server itself (a Linux machine), which is the server I am trying to access, and it is not reachable at the URL. Below is the output from the Splunk server:

sudo iptables -L
[sudo] password for acnops_splunk:
Chain INPUT (policy ACCEPT)
target     prot opt source    destination
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:irdmi
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:palace-6
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:distinct32
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:8089
ACCEPT     tcp  --  anywhere  anywhere     tcp dpt:distinct
Chain FORWARD (policy ACCEPT)
target     prot opt source    destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source    destination

[acnops_splunk@IEM******** ~]$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[acnops_splunk@IEM****** ~]$

Looking forward to a solution.
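One thing that may stand out (an assumption on my part, not a confirmed diagnosis): in the firewall-cmd output above, no ports are listed and only the ssh and dhcpv6-client services are allowed, so firewalld may be blocking Splunk Web even though iptables accepts port 8000 (dpt:irdmi is port 8000). A sketch of how you might open the default Splunk Web port, 8000 (substitute your actual httpport from web.conf):

```
sudo firewall-cmd --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
```

The last command is just to verify that 8000/tcp now appears in the list.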
Hi @Bluekeeper, sorry, but I don't understand your requirement: why do you want to do this? About your question: REST is used only for searching. About credentials, you could try to store them using Splunk's encryption, but I don't understand what you want to do. I suppose that you would modify some conf file in the deployment-apps folder of the Deployment Server; in this case the only solution is a script outside the Splunk web GUI. Ciao. Giuseppe
What is it you are trying to achieve and why can you not do it using simple drilldowns?
Hello, I tried to configure SplunkForwarder but I got this error message:

"Unable to initialize modular input "upload_pcap" defined in the app "splunk_app_stream": Introspecting scheme=upload_pcap: script running failed (exited with code 1)"
There are many ways to do this, but using the if function is perhaps my last choice. Try this:

| rex field=index "_(?<app_id>\w+?)_(?<environment>(non_)*prod)"

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="index
sony_app_XXXXXX_non_prod
sony_app_XXXXXX_prod
sony_app_123456_non_prod
sony_app_xyzabc_prod"
``` the above emulates index = sony_* ```

Output from this emulation is:

app_id      environment  index
app_XXXXXX  non_prod     sony_app_XXXXXX_non_prod
app_XXXXXX  prod         sony_app_XXXXXX_prod
app_123456  non_prod     sony_app_123456_non_prod
app_xyzabc  prod         sony_app_xyzabc_prod

Hope this helps.
Hi, I want to move a file from a client to the Deployment Server via the Search Head. I was thinking of something like:

| makeresults
| eval content="the content of the text file that needs to be sent over to the DS."
| search [ | rest splunk_server=ds /services/search/jobs search="| outputlookup test.csv" ]

but it seems that the rest command does not support anything except search (it does not work with pipes after search either), unlike using REST from the CLI or REST queries from outside Splunk. Since it would be a challenge to store credentials protected inside an app, doing it with a script or the CLI would be my last option. Doing it through the web interface would be better for further development. Thanks
First, when posting type 2, which is in JSON, please use raw text. Splunk's "syntax highlights" view is non-compliant and very difficult to process. (See the crazy rex in my emulation below; you also introduced additional syntax errors when attempting to simplify or anonymize.) Also in type 2, you should preserve the uuid's value, as that's the only key that distinguishes between the two. For everyone's benefit, I'm posting reconstructed raw events from type 2:

{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "9c5b94b1-35ad-49bb-b118-8e8fc24abf80"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 2 }
      ],
      "id": "Cartman",
      "name": "Cartman"
    }
  },
  "@timestamp": "2025-01-01T00:00:01.833685"
}

{
  "@message": {
    "attributeContract": {
      "extendedAttributes": [],
      "maskOgnlValues": false,
      "uniqueUserKeyAttribute": "uuid"
    },
    "attributeMapping": {
      "attributeContractFulfillment": {
        "uuid": {
          "source": { "type": "ADAPTER" },
          "value": "550e8400-e29b-41d4-a716-446655440000"
        }
      },
      "attributeSources": [],
      "issuanceCriteria": { "conditionalCriteria": [] }
    },
    "configuration": {
      "fields": [
        { "name": "Application ObjectClass", "value": "cartmanUser" },
        { "name": "Application Entitlement Attribute", "value": "cartmanRole" },
        { "name": "IAL to Enforce", "value": 1 }
      ],
      "id": "Cartman",
      "name": "Cartman"
    }
  },
  "@timestamp": "2025-01-02T00:00:01.833685"
}

Like @bowesmana, I fail to see the relevance of type 1. Type 2 is all you need to produce the results you want.
I also don't see why you want to print two tables rather than one table with two rows (differentiated by UUID). So, this is what I'm going to show. The actual code is pretty simple; my main time was sunk into reconstructing valid JSON data from your pasted text.

| fields @message.attributeMapping.attributeContractFulfillment.uuid.value ``` ^^^ this line is just to declutter output ```
| spath path=@message.configuration.fields{}
| eval restructured_fields = json_object()
| foreach @message.configuration.fields{} mode=multivalue
    [eval restructured_fields = json_set(restructured_fields, json_extract(<<ITEM>>, "name"), json_extract(<<ITEM>>, "value"))]
| spath input=restructured_fields

(The foreach syntax above requires Splunk 9.0.) Output from the two reconstructed events is as follows:

@message.attributeMapping.attributeContractFulfillment.uuid.value  Application Entitlement Attribute  Application ObjectClass  IAL to Enforce
9c5b94b1-35ad-49bb-b118-8e8fc24abf80                               cartmanRole                        cartmanUser              2
550e8400-e29b-41d4-a716-446655440000                               cartmanRole                        cartmanUser              1

Does this satisfy your requirements? It is useful to print out the two intermediate JSON objects used in this search so you can clearly see the dataflow.

For the first event:
@message.configuration.fields{}:
{ "name": "Application ObjectClass", "value": "cartmanUser" }
{ "name": "Application Entitlement Attribute", "value": "cartmanRole" }
{ "name": "IAL to Enforce", "value": 2 }
restructured_fields:
{"Application ObjectClass":"cartmanUser","Application Entitlement Attribute":"cartmanRole","IAL to Enforce":2}

For the second event:
@message.configuration.fields{}:
{ "name": "Application ObjectClass", "value": "cartmanUser" }
{ "name": "Application Entitlement Attribute", "value": "cartmanRole" }
{ "name": "IAL to Enforce", "value": 1 }
restructured_fields:
{"Application ObjectClass":"cartmanUser","Application Entitlement Attribute":"cartmanRole","IAL to Enforce":1}

@message.configuration.fields{}, of course, is extracted directly from raw data.
Here is an emulation for you to play with and compare with real data type 2:

| makeresults
| fields - _time
| eval sourcetype = "type2", data = mvappend("{ [-] @message: { [-] attributeContract: { [-] extendedAttributes: [ [-] ] maskOgnlValues: false uniqueUserKeyAttribute: uuid } attributeMapping: { [-] attributeContractFulfillment: { [-] uuid: { [-] source: { [-] type: ADAPTER } value: 9c5b94b1-35ad-49bb-b118-8e8fc24abf80 } } attributeSources: [ [-] ] issuanceCriteria: { [-] conditionalCriteria: [ [-] ] } } configuration: { [-] fields: [ [-] { [-] name: Application ObjectClass value: cartmanUser } { [-] name: Application Entitlement Attribute value: cartmanRole } { [-] name: IAL to Enforce value: 2 } ] id: Cartman name: Cartman } } @timestamp: 2025-01-01T00:00:01.833685 }", "{ [-] @message: { [-] attributeContract: { [-] extendedAttributes: [ [-] ] maskOgnlValues: false uniqueUserKeyAttribute: uuid } attributeMapping: { [-] attributeContractFulfillment: { [-] uuid: { [-] source: { [-] type: ADAPTER } value: 550e8400-e29b-41d4-a716-446655440000 } } attributeSources: [ [-] ] issuanceCriteria: { [-] conditionalCriteria: [ [-] ] } } configuration: { [-] fields: [ [-] { [-] name: Application ObjectClass value: cartmanUser } { [-] name: Application Entitlement Attribute value: cartmanRole } { [-] name: IAL to Enforce value: 1 } ] id: Cartman name: Cartman } } @timestamp: 2025-01-02T00:00:01.833685 }")
| rex field=data mode=sed "s/\[-]//g s/\n+([\w@])/\n\"\1/g s/([^\"]): (true|false|\d+\n)/\1\": \2/g s/([^\"]):(\W+\n)/\1\":\2/g s/([^\"]): (.+)/\1\": \"\2\"/g s/([\w\"}\]])\n([\"{\[])/\1,\n\2/g"
| mvexpand data
| rename data AS _raw
| spath
``` data type 2 emulation above ```

(Can you see how crazy that rex command is?)

For completeness, this is how you extract data from type 1 in case it is of use to you:

| eval message = replace(message, "'", "")
| spath input=message

The message field should have been present at search time.
The result from your sample data is:

UserAccessSubmission.csp  UserAccessSubmission.mail  UserAccessSubmission.objectClass  UserAccessSubmission.trackingId  UserAccessSubmission.uuid  sourcetype  trackingid
Butters                   sean@southpark.net         cartmanUser                       tid:13256464                     abc123                     type1       tid:13256464
Butters                   sean@southpark.net         cartmanUser                       tid:13256464                     abc123                     type1       tid:13256464
Butters                   sean@southpark.net         cartmanUser                       tid:13256464                     abc123                     type1       tid:13256464
Butters                   sean@southpark.net         StanUser                          tid:13256464                     abc123                     type1       tid:13256464
Butters                   sean@southpark.net         StanUser                          tid:13256464                     abc123                     type1       tid:13256464

This is the emulation of data type 1 used to extract the above:

| makeresults
| fields - _time
| eval sourcetype = "type1", data = split("2025-01-01 00:00:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}' 2025-01-01 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}' 2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"cartmanUser\",\"csp\":\"Butters\"}}' 2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"StanUser\",\"csp\":\"Butters\"}}' 2025-01-02 00:01:00,125 trackingid=\"tid:13256464\"message='{\"UserAccessSubmission\":{\"uuid\":\"abc123\",\"mail\":\"sean@southpark.net\",\"trackingId\":\"tid:13256464\",\"objectClass\":\"StanUser\",\"csp\":\"Butters\"}}'", " ")
| mvexpand data
| rename data AS _raw
| extract
``` data type 1 emulation above ```
Hi @dmoberg , good for you, see next time! let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I was able to solve it. By populating the dropdowns in the dashboard from an inputlookup (with data from a scheduled search), the approach I detailed below (setting the FirstLoad token) started working. I'm not sure exactly why this made it work, but at least it works.
You also need to give the splunk user permission to write to the volumes you are writing to on the indexer, i.e. if you have /hot and /cold volumes on the indexer, the splunk user needs ownership of, and permission to write to, those volumes.
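A minimal sketch of what that usually involves (assuming the service account is named splunk and the volumes are mounted at /hot and /cold; both names are assumptions, adjust to your environment):

```
# Give the splunk user ownership of the index volumes
sudo chown -R splunk:splunk /hot /cold

# Verify the splunk user can actually write there
sudo -u splunk touch /hot/.write_test && sudo -u splunk rm /hot/.write_test
```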