Hi @bhall_2, are you speaking of Edge Processor? If yes, you can find the documentation at https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/EdgeProcessor/AboutEdgeProcessorSolution Ciao. Giuseppe
I have this rule, and I need it to trigger when the count of results/events is greater than 4, but the "Trigger Condition" did not work. Is there something I can add to the query?
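One common workaround (a sketch, not necessarily the fix for this particular rule) is to move the threshold into the search itself and set the alert to trigger whenever the number of results is greater than 0:

<your base search>
| stats count
| where count > 4

Here <your base search> stands in for the rule's actual query; the alert then fires only when the event count exceeds 4.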
Honestly - I have no idea what you are talking about. Could you elaborate a bit more on what such a thing would do? Maybe it's possible to implement it using existing components. Or maybe it's simply impossible. But to answer such a question it's necessary to understand it first.
Unfortunately, at the moment Splunk cannot automatically extract structured data if it's not the whole event (as in your case - the json part is preceded by a non-json header). There is an open idea for that: https://ideas.splunk.com/ideas/EID-I-208 For now you can either parse the json part at search time with the help of the spath command, as @VatsalJagani already showed, or cut away the header part using SEDCMD or INGEST_EVAL (possibly extracting indexed fields, if needed, prior to removing the non-structured part). As a side note - you should _not_ use the _json sourcetype. Define your own.
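For illustration, a minimal props.conf sketch of the SEDCMD approach (the sourcetype name and the assumption that the header itself contains no { character are mine):

[my_custom_json]
SEDCMD-strip_header = s/^[^{]+//
KV_MODE = json

This removes everything up to the first opening brace at index time, so the stored _raw is pure json and KV_MODE = json can auto-extract the fields at search time.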
You have a search element within a search element. As you can see at https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/PanelreferenceforSimplifiedXML#search, a search element is not allowed as a child of another search element.
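If the intent was to chain one search off another, Simple XML supports a base search referenced by id instead of nesting (a sketch; the id, queries and placement are illustrative):

<search id="base">
  <query>index=_internal | stats count by sourcetype</query>
</search>
...
<search base="base">
  <query>| where count &gt; 100</query>
</search>

The second search post-processes the first instead of being nested inside it.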
It's not all that's at play here, but you're creating a whole lot of files (you could just create the key with the -nodes option to have it non-encrypted), and your config apparently points to splunk.key, which - judging by the sequence of commands - is encrypted. As a side note - putting your private key into an app is not necessarily the most secure thing to do.
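For reference, a sketch of generating a non-encrypted key and a self-signed certificate in a single step (file names and subject are illustrative):

openssl req -x509 -newkey rsa:2048 -nodes -keyout splunk.key -out splunk.pem -days 365 -subj "/CN=splunk"

The -nodes option leaves splunk.key without a passphrase, so splunkd can read it directly.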
When you say "one has" and "the other has" it suggests you have only two indexers. That's not a very good cluster design. As a rule of thumb you should have at least RF+1 indexers so that in case of ...
See more...
When you say "one has" and "the other has" it suggests you have only two indexers. That's not a very good cluster design. As a rule of thumb you should have at least RF+1 indexers so that in case of a disaster the cluster can recover to a complete state. (you can compare it to a RAID setup with a hot-spare) But that's not the main point. The main point is that if Replication Factor is not met it means that for some reason not all buckets are available in many enough copies. You can check which buckets are where with help of the dbinspect command and then look for messages regarding that particular missing bucket in the _internal log. That should give you some hint about the cause of the missing copy.
I'm struggling to find a solution to this too. I've got a format block to grab 5 values from the haveibeenpwned API, and one is always returned as an array. From there, I have a format block to cycle through and create a markup table. I'm just trying to get the "Data Compromised" table to appear as a string without any of the [ ' ] symbols.
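In case it helps, one approach is a small custom code block that joins the list before the format block renders it (a sketch; the variable names are illustrative, assuming the playbook passes the array as a Python list):

data_compromised = ", ".join(data_classes_list)

i.e. turn the list into a plain comma-separated string first, so the format block never sees the [ ' ] characters.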
This capability does (almost) exactly what it says - it lets you dispatch a REST call (via the | rest command) to the indexers; to configured search peers, to be precise, since in some cases (typically on a Monitoring Console) you might want to run REST calls against a non-indexer peer. Without it you can only run | rest against your local instance. As far as I remember, that capability is not available to users in Splunk Cloud.
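A quick way to see the difference (a sketch):

| rest /services/server/info splunk_server=*

With the capability this returns one row per configured search peer; without it (or with splunk_server=local) you only get the instance you are searching from.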
Hi @scelikok, I agree with you. Let me show you my props and transforms conf files.

props.conf:
[custom_syslog]
transforms-rebuild = group1
SHOULD_LINEMERGE = false

transforms.conf:
[group1]
REGEX = (?<group1>.+\s\-\s\-\s\-\s).*.auditID.:.(?<group2>[\w-]+)..*requestURI.:.(?<group3>[^,]+).+username.:.(?<group4>[^,]+).+sourceIPs....(?<group5>\d+.\d+.\d+.\d+)
FORMAT = group1::$1, group2::$2, group5::$3, group3::$4, group4::$5

Did I forget something in the conf files? Regards, Alessandro
Hi @richgalloway, I added only "% Processor Time" & "Working Set - Private" in the Process source stanza, but it is still collecting all the other 28 counters. Before, with counters = *, it showed all 28 counters. Now, even though I added only those 2 counters ("% Processor Time" & "Working Set - Private"), it is still showing the other counters as well. Please help me with this. Thanks, Pooja
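For comparison, a minimal inputs.conf sketch of how such a perfmon stanza usually looks (counters are separated by semicolons; the interval and instances values are illustrative):

[perfmon://Process]
object = Process
counters = % Processor Time; Working Set - Private
instances = *
interval = 60

If a broader counters setting for the same stanza survives elsewhere (for example in another app, or in a default file layered under your local one), it can take precedence over this one.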
Hi All, I have a question. What exactly does 'dispatch_rest_to_indexers' mean? I am getting a warning when running the rest command, and I am on Splunk Cloud: Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability. I see many blogs talking about this message, but I did not come across a clear explanation of what this parameter actually means and does. Can anyone throw some light on this? Thanks in advance, PNV
I am getting the below error while making an MSSQL connection with DB Connect 3.15.0: HTTPConnectionPool(host='127.0.0.1', port=9998): Read timed out. (read timeout=310) Can anyone help me out?
Looking at the Splunk add-on for Cyber Ark, it appears the process is flawed: the Cyber Ark supplied ./Syslog/RFC5424Changes.xsl fragment generates a syslog timestamp from the first syslog/audit_record/IsoTimestamp, but the code in forExport/SplunkCIM.xsl then generates multiple CEF-like events on a 'single line' for the (possibly multiple) audit_record elements, and these hold no timestamps. Thus, if the XSLT iterates over more than one event, not only do the timestamps for the individual events get discarded, but one can also end up with a single CEF-like event containing multiple key-value pairs where the keys are repeated. Basically, it appears that multiple Cyber Ark events are concatenated into one syslog record without any clear form of event separation, and the timestamps for the 2nd and subsequent events are lost.
I can think of at least one valid use case for using multiple timezones - if your team works globally and wants to know what the local time at the originating site is (for example, to decide whether something happened during work hours or not). Yes, it could be done differently, but showing local times is the most natural thing. But I admit that it's a relatively rare use case, and allowing users to easily display dates in various timezones (especially without explicit timezone information in the rendered timestamp) can lead to a huge load of confusion and badly built dashboards.
Well, I do not get how to fix that; I can't see a dashboard with faulty buckets. I don't know which buckets are at fault, and the bucket count is still different on both peers.
@khj Hello, the Splunk DB Connect add-on version you downloaded is not supported on your Splunk version 7.3.4. I would suggest you upgrade your Splunk environment and install a suitable DB Connect version. See: System requirements - Splunk Documentation
Hello Splunkers! I am downloading the eStreamer TA for Splunk (Cisco Secure Firewall app), and I am facing the issue below: /opt/splunk/etc/apps/TA-eStreamer/bin/encore$ openssl pkcs12 -in client.pkcs12 -nocerts -nodes -out "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/10.1.50.10-8302_pkcs.key"
Enter Import Password:
Error outputting keys and certificates
802B49170A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:../crypto/evp/evp_fetch.c:349:Global default library context, Algorithm (RC2-40-CBC : 0), Properties () I understand that this error is related to a Python package, but I can see that Python is already installed. Can anyone help me?
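For what it's worth, RC2-40-CBC was moved to the legacy provider in OpenSSL 3.x, so one thing to try (a sketch, assuming the box runs OpenSSL 3.x) is adding the -legacy flag to the same command:

openssl pkcs12 -in client.pkcs12 -nocerts -nodes -legacy -out "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/10.1.50.10-8302_pkcs.key"

The flag loads the legacy algorithm provider so the RC2-encrypted PKCS#12 file can still be read.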