All Posts
@rohithvr19  Your script won't work on my machine, so I have created a sample script that returns a simple "Hello world" text when the dashboard button is clicked. Just create a similar configuration and Python file as per your requirements. Below are the code and the file/folder structure.

hello_world.py

import splunk.rest
from json import dumps

class HelloWorld(splunk.rest.BaseRestHandler):
    '''
    Class for the custom service endpoint.
    '''

    def handle_POST(self):
        '''
        Handler for this endpoint.
        :return: None
        '''
        payload = {
            "text": "Hello world!"
        }
        response = dumps({"data": payload, "status": "OK", "error": "None"})
        self.response.setHeader('content-type', 'application/json')
        self.response.write(response)

    # Handle other verbs too, otherwise Splunk will throw an error
    handle_GET = handle_POST

restmap.conf

[script:my_custom_endpoint]
match = /my_custom_endpoint
handler = hello_world.HelloWorld

web.conf

[expose:my_custom_endpoint]
pattern = my_custom_endpoint
methods = GET, POST

XML

<dashboard script="fetch_data.js" version="1.1">
  <label>My Dashboard</label>
  <description>Dynamic Result Example</description>
  <row>
    <panel>
      <html>
        <div>
          <button id="fetch-data-button">Fetch Data</button>
          <div id="div_result" style="margin-top: 10px; border: 1px solid #ccc; padding: 10px;">Result will be displayed here.</div>
        </div>
      </html>
    </panel>
  </row>
</dashboard>

fetch_data.js

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc) {
    $('#fetch-data-button').on('click', function() {
        var service = mvc.createService();
        service.post('/services/my_custom_endpoint', {}, function (err, response) {
            console.log(response);
            console.log(response.data);
            console.log(response.data.data);
            console.log(response.data.data.text);
            $('#div_result').html(response.data.data.text);
        });
        return false;
    });
});

Try this code to learn and understand custom endpoints, then develop a new endpoint as per your needs. I hope this will help you.
Thanks, KV. An upvote would be appreciated if any of my replies helped you solve the problem or gain knowledge.
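Outside a Splunk instance you can't import splunk.rest, but the response envelope itself is plain JSON. Here is a minimal standalone sketch (the build_response function name is mine, not part of the endpoint) that mirrors what the handler writes and why the JavaScript drills down two levels:

```python
import json

# splunk.rest is only available inside Splunk, so this sketch just
# reproduces the JSON envelope the handler writes.
def build_response():
    payload = {"text": "Hello world!"}
    return json.dumps({"data": payload, "status": "OK", "error": "None"})

# Over HTTP the SplunkJS service wraps the body in its own "data"
# attribute, which is why fetch_data.js reads response.data.data.text.
body = json.loads(build_response())
print(body["data"]["text"])  # -> Hello world!
```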
Hi @michael_vi , a ServerClass is a relation table between a list of hosts and a list of apps to be deployed to those hosts, so you can move apps between ServerClasses without any problem, obviously paying attention to cover all the hosts. Ciao. Giuseppe
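As a sketch of what that relation looks like on disk, here is a hypothetical serverclass.conf fragment (the serverclass, app, and host names are made up for illustration); moving an app to another ServerClass only changes which stanza it sits under:

```ini
# Hypothetical serverclass.conf sketch -- names are illustrative only.
[serverClass:linux_servers]
whitelist.0 = web-*.example.com

# Deploy my_app to every host matching the serverclass whitelist.
[serverClass:linux_servers:app:my_app]
restartSplunkd = true
```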
Hi @Richy_s , as I said (and I say this in line with my second role in my company: privacy and ISO 27001 Lead Auditor!), the only way to mask PII is to analyze your new data stored in a temporary index, deriving a list of controls. Then you can implement these rules in props and transforms, as described in the link below. Then you can prepare an alert, run e.g. once a day, applying the same controls to all the data archived that day. If the alert finds something, it means you have to extend your checks to other data. It isn't possible to run these controls before indexing, because Splunk searches run on indexed data. The only other solution could be: index all data into temporary indexes not accessible to users, execute the checks, mask any data found, and copy all the data into the final indexes accessible to users. The only issue is that, in this way, you double the license consumption! Ciao. Giuseppe
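To make the "list of controls" idea concrete, here is a minimal Python sketch of regex-based masking; the two patterns are illustrative placeholders, not a complete PII rule set, and in Splunk the equivalent logic would live in props/transforms:

```python
import re

# Hypothetical controls -- real deployments would encode their own
# patterns in props.conf/transforms.conf rather than in Python.
CONTROLS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(event):
    # Replace every match of every control with a labelled placeholder.
    for name, pattern in CONTROLS.items():
        event = pattern.sub("<masked:{}>".format(name), event)
    return event

print(mask("user john@example.com logged in"))  # -> user <masked:email> logged in
```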
You could try something like this

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = \_subsecond\:\:(\.\d+)
FORMAT = $1 $0
DEST_KEY = subsecond_temp

[metadata_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp),_raw,subsecond_temp." "._raw)

Of course you need to add the metadata_fix_subsecond transform into TRANSFORMS-zza-syslog before metadata_subsecond. The number of backslashes in that subsecond regex is surprisingly high; those characters shouldn't normally need escaping.
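The capture behaves the same with the redundant escapes removed; a quick Python check of the pattern against a _meta-style string (the sample metadata line is made up for illustration):

```python
import re

# Same capture as the stanza's REGEX, without the unnecessary
# backslashes -- "_" and ":" are literal characters in PCRE.
pattern = re.compile(r"_subsecond::(\.\d+)")

meta = "timestartpos::0 timeendpos::23 _subsecond::.1234"
m = pattern.search(meta)
print(m.group(1))  # -> .1234
```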
Splunk's version of arrays is the multivalue field, so if you change your input to a multivalue field, you could do something like this

| eval Tag = split(lower("Tag3,Tag4"),",")
| spath
| foreach *Tags{}
    [| eval field="<<FIELD>>"
     | foreach <<FIELD>> mode=multivalue
        [| eval tags=if(isnull(tags),if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null()),mvappend(tags, if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null())))]
    ]
| stats values(tags)
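For readers less familiar with foreach, here is a standalone Python sketch of what the SPL above computes (the event dict and function name are mine, just to express the logic): collect the names of any *Tags{} fields whose values intersect the wanted tag list, case-insensitively.

```python
# Emulation of the nested foreach: keep any *Tags{} field whose
# multivalue contents intersect the wanted tags (case-insensitive).
def matching_fields(event, wanted):
    wanted_lower = {t.lower() for t in wanted}
    return [
        field for field, values in event.items()
        if field.endswith("Tags{}")
        and wanted_lower & {v.lower() for v in values}
    ]

event = {
    "Info.Apps.MessageQueue.ReportTags{}": ["Tag1", "Tag4"],
    "Info.Apps.MessageQueue.UserTags{}": ["Tag3", "Tag4", "Tag5"],
    "Info.Apps.Frontend.ClientTags{}": ["Tag12", "Tag47"],
}
print(matching_fields(event, ["Tag3", "Tag4"]))
# -> ['Info.Apps.MessageQueue.ReportTags{}', 'Info.Apps.MessageQueue.UserTags{}']
```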
OK. Let me rephrase it. This is a typical attempt to "fix" policy issues with technical means. Without _knowing_ where the PII is, you're doomed to guess, and guessing is never accurate. BTDTGTT
I completely agree with what you've stated below @gcusello @isoutamo @ITWhisperer @PickleRick , and I'm on the same page. However, as you know, compliance principles operate on the premise that whether an issue is present or not, it's best to assume it is and address it accordingly. In my situation, we mainly deal with network-related data where the likelihood of finding PII is very low. Nonetheless, as a security requirement, we want to establish controls that ensure any sensitive information, if present, is masked.
Had I not chosen the solution already, I would have given it to you for a more comprehensive answer.
Not really. The main point here is that my input to this query would be an array instead of a simple value, e.g.

current input format:
| eval Tag = "Tag1"

desired input format:
| eval Tags = ["Tag3", "Tag4"]
For this

| eval Tags = ["Tag3", "Tag4"]
| spath
| foreach *Tags{}
    [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), "<<FIELD>>", null()))]
| dedup tags
| stats values(tags)

I would like to get

Info.Apps.MessageQueue.ReportTags{}
Info.Apps.ReportingServices.ReportTags{}
Info.Apps.MessageQueue.UserTags{}
Hello Team,

We want to monitor our AWS OpenSearch resources with AppDynamics and have configured the AWS OpenSearch CloudWatch extension, but unfortunately it is throwing the below error:

"ERROR AmazonElasticsearchMonitor - Unfortunately an issue has occurred: java.lang.NullPointerException: null
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.createMetricsProcessor(AmazonElasticsearchMonitor.java:77) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:45) ~[?:?]
at com.appdynamics.extensions.aws.elasticsearch.AmazonElasticsearchMonitor.getNamespaceMetricsCollector(AmazonElasticsearchMonitor.java:36) ~[?:?]
at com.appdynamics.extensions.aws.SingleNamespaceCloudwatchMonitor.getStatsForUpload(SingleNamespaceCloudwatchMonitor.java:31) ~[?:?]
at com.appdynamics.extensions.aws.AWSCloudwatchMonitor.doRun(AWSCloudwatchMonitor.java:102) [?:?]
at com.appdynamics.extensions.AMonitorJob.run(AMonitorJob.java:50) [?:?]
at com.appdynamics.extensions.ABaseMonitor.executeMonitor(ABaseMonitor.java:199) [?:?]
at com.appdynamics.extensions.ABaseMonitor.execute(ABaseMonitor.java:187) [?:?]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.MonitorTaskRunner.runTask(MonitorTaskRunner.java:149) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.runTask(PeriodicTaskRunner.java:86) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.agent.systemagent.components.monitormanager.managed.PeriodicTaskRunner.run(PeriodicTaskRunner.java:47) [machineagent.jar:Machine Agent v24.9.1.4416 GA compatible with 4.4.1.0 Build Date 2024-10-03 14:53:45]
at com.singularity.ee.util.javaspecific.scheduler.AgentScheduledExecutorServiceImpl$SafeRunnable.run(AgentScheduledExecutorServiceImpl.java:122) [agent-24.10.0-891.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask$Sync.innerRunAndReset(ADFutureTask.java:335) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADFutureTask.runAndReset(ADFutureTask.java:152) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.access$101(ADScheduledThreadPoolExecutor.java:128) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.runPeriodic(ADScheduledThreadPoolExecutor.java:215) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADScheduledThreadPoolExecutor$ADScheduledFutureTask.run(ADScheduledThreadPoolExecutor.java:253) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.runTask(ADThreadPoolExecutor.java:694) [agent-24.10.0-891.jar:?]
at com.singularity.ee.util.javaspecific.scheduler.ADThreadPoolExecutor$Worker.run(ADThreadPoolExecutor.java:726) [agent-24.10.0-891.jar:?]
at java.lang.Thread.run(Thread.java:829) [?:?]"

Can someone help here? We have used the below GitHub code base for this: https://github.com/Appdynamics/aws-elasticsearch-monitoring-extension
That was actually my first idea as well, but both our DNS servers are reachable, tcpdump shows no activity on port 53 during those 19 seconds, and Splunk is even able to do reverse lookups on the sending devices' IPs.
| eval overhead=(totaltime - routingtime)
| appendpipe
    [| bin span=1s _time
     | stats avg(overhead) as overhead by _time
     | eval hostname="Overall"]
| timechart span=1s eval(round(avg(overhead),1)) by hostname
Yes, it is indeed. I thought of that, but I assume the creators of SC4S wanted the timestamp to have the fractional seconds appended when there is a metadata variable holding them. In that case, the decimal point and fractional seconds need to follow the timestamp without any whitespace, which is why the whitespace is missing where you pointed it out. If there isn't a variable holding the fractional seconds, however, as in my case, no trailing space is added to the timestamp, and the host key-value pair follows it directly without a whitespace. Any idea how I could add a whitespace conditionally?
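Expressed in Python just to pin down the desired conditional-join logic (the function and sample values are mine, not SC4S template syntax): append the fraction with no space only when it exists, and always emit exactly one space before the next field.

```python
# Sketch of the conditional join being asked about.
def join_timestamp(timestamp, subsecond, rest):
    frac = subsecond if subsecond else ""
    return "{}{} {}".format(timestamp, frac, rest)

print(join_timestamp("Jan 01 00:00:00", ".123", "host=web01"))
# -> Jan 01 00:00:00.123 host=web01
print(join_timestamp("Jan 01 00:00:00", None, "host=web01"))
# -> Jan 01 00:00:00 host=web01
```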
Do you mean this?

| makeresults
| eval _raw = "{
  \"Info\": {
    \"Apps\": {
      \"ReportingServices\": {
        \"ReportTags\": [ \"Tag1\" ],
        \"UserTags\": [ \"Tag2\", \"Tag3\" ]
      },
      \"MessageQueue\": {
        \"ReportTags\": [ \"Tag1\", \"Tag4\" ],
        \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ]
      },
      \"Frontend\": {
        \"ClientTags\": [ \"Tag12\", \"Tag47\" ]
      }
    }
  }
}"
| eval Tag = "Tag1"
| spath
| foreach *ReportTags{}
    [| eval tags=mvappend(tags, if(lower('<<FIELD>>') = lower(Tag), '<<FIELD>>', null()))]
| dedup tags
| stats values(tags)

This gives

values(tags)
Tag1
Tag4

Note that when you use double quotes on the right-hand side of an eval expression, the quoted entity is treated as a literal string; therefore your original search gives

values(tags)
Info.Apps.MessageQueue.ReportTags{}
Info.Apps.ReportingServices.ReportTags{}
I don't know about this particular case, but consistent delays on connection init are often caused by DNS issues (either DNS timeouts resolving the host to connect to, or delays on the receiving side due to attempts to resolve the IP back to the hostname of the source host).
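One quick way to test the DNS theory from the affected host is to time a lookup directly; a resolver timeout typically shows up here as a multi-second pause. A minimal sketch (the hostname is a placeholder, substitute the peer you are connecting to):

```python
import socket
import time

# Time a forward DNS lookup; a slow resolver shows up as a long delay.
def resolve_time(host):
    start = time.monotonic()
    socket.getaddrinfo(host, None)
    return time.monotonic() - start

print("%.3fs" % resolve_time("localhost"))
```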
I mean, if I change a Server Class in Deployment Server from one to another. Everything else stays the same.
Here are the Splunk Validated Architectures: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance
Hi @danielbb , as @isoutamo and @kiran_panchavat also said, 8089 is a management port that cannot be used via the GUI; in addition, connections using 8089 are all HTTPS, not HTTP. Ciao. Giuseppe
Hi @michael_vi , sorry, but your question isn't so clear: what do you mean by "app class"? Are you speaking of an add-on for input data, or something else? Splunk doesn't reindex the same data twice, even if you change the data filename. The only way to reindex already indexed data is if you used crcSalt = <SOURCE> in your inputs.conf stanzas and you changed the data filename. Final note: all changes to a conf file (not made via the GUI) require a Splunk restart on the machine. Ciao. Giuseppe
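For reference, a hypothetical inputs.conf fragment showing the crcSalt setting mentioned above (the monitored path and index name are made up for illustration):

```ini
# Hypothetical inputs.conf stanza. With crcSalt = <SOURCE>, the file's
# full path is mixed into its CRC, so renaming the monitored file makes
# Splunk treat it as new data and reindex it.
[monitor:///var/log/myapp/app.log]
index = main
crcSalt = <SOURCE>
```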