All Posts

OK. It seems... overly complicated. As I understand it, you have a customer with DB Connect inputs that pull data from production databases, right? But no audit logs, right? And now you want to pull the audit logs, which are not going to be indexed in Splunk, and ship them somewhere else? That makes no sense. (Also, I'm not sure all databases actually store audit data in the databases themselves; as far as I remember, MySQL logged audit events to normal flat text files, but I haven't worked with it for quite some time, so I might be wrong here.) Why not use something that connects your source directly to your destination instead of forcing Splunk components to do something they are not meant to do?
OK. Once again - did you "connect MySQL to Splunk using DB Connect" on the Universal Forwarder? How?
I have a question about breaking up a single line of data to send to the Splunk indexer. We are sending data that can have over 50,000 characters on a single line. I would like to know if there is a way to break up the data on the source server with the Universal Forwarder before sending it to the indexer, and then reassemble it after it arrives at the indexer. We would like to know if this is possible rather than having to increase the TRUNCATE size on the indexer to take all the data at once.
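For context, the indexer-side setting we are trying to avoid raising is TRUNCATE in props.conf, which as I understand it would look something like this (the sourcetype name here is just a placeholder):

# props.conf on the indexer ("my_long_events" is a placeholder sourcetype)
[my_long_events]
# raise the per-event character limit from the default of 10000 (0 = unlimited)
TRUNCATE = 100000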
You overcomplicate your case. <your initial search> will give you a list of printer activities. As a side note, you didn't take into account the fact that there is a field called count. I assume it can contain a value higher than 1. If it doesn't, you can probably use count instead of sum later on.

For naming's sake, we'll overwrite the format name:

| eval size_paper=if(size_paper="11x17","legal",size_paper)

Now you can use the paper format to create additional fields based on the paper size value:

| eval {size_paper}_jobs=jobs
| eval {size_paper}_pages=pages

Now you can just aggregate:

| stats sum(*_jobs) as *_jobs sum(*_pages) as *_pages sum(jobs) as overall_count sum(pages) as overall_pages by prnt_name

And all that's left is enriching your results with your lookup contents:

| lookup printers_csv prnt_name OUTPUT location
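Put together, the whole pipeline would look roughly like this (a sketch; I'm substituting pages_printed from your sample data where the steps above said pages, and assuming a lookup definition named printers_csv; adjust the names to your environment):

index=printer
| eval size_paper=if(size_paper="11x17","legal",size_paper)
| eval {size_paper}_jobs=jobs
| eval {size_paper}_pages=pages_printed
| stats sum(*_jobs) as *_jobs sum(*_pages) as *_pages sum(jobs) as overall_count sum(pages_printed) as overall_pages by prnt_name
| lookup printers_csv prnt_name OUTPUT location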
After looking over my initial post, I thought I would clarify a little more as to what I am after here. I am looking to get the total print jobs that are "letter", the total pages printed that are "letter", the total print jobs that are "11x17" (legal), and the total pages printed that are "11x17", in addition to my initial working query of the sum of total print jobs and total pages printed logged by a specific printer. Thanks
I have a working query that gives me a list of all printers, total job count, total page count, and the location of each printer using a lookup. Sample data, lookup, and query are below.

Sample data (print logs from index=printer):

prnt_name   jobs   pages_printed   size_paper
CS001       1      5               letter
CS001       1      10              11x17
CS002       1      20              11x17
CS003       1      10              letter
CS003       1      15              11x17

Lookup data (printers.csv):

prnt_name   location
CS001       office
CS002       dock
CS003       front

Splunk query:

index=printer
| stats count sum(pages_printed) AS tot_prnt_pgs by prnt_name
| lookup printers.csv prnt_name AS prnt_name OUTPUT location
| table prnt_name, location, count, tot_prnt_pgs

Splunk query results:

prnt_name   location   count   tot_prnt_pgs
CS001       office     2       15
CS002       dock       1       20
CS003       front      2       25

I have been trying to use a count(eval(if ...)) clause but am not sure how to implement it, or if that is even the correct way to get the results I am after. I have been trying various arguments from other Splunk posts but can't seem to make it work. Below is the output I am trying to get ("ltr" represents letter and "lgl" represents 11x17):

prnt_name   location   count   tot_prnt_pgs   ltr_count   ltr_tot_pgs   lgl_count   lgl_tot_pgs
CS001       office     2       15             1           5             1           10
CS002       dock       1       20             0           0             1           20
CS003       front      2       25             1           10            1           15

Appreciate any time given on this.
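Roughly, the count(eval(...)) shape I have been experimenting with looks like this (sketched from other posts; field names as in my sample data above, and I'm not certain the conditional pieces are right):

index=printer
| stats count
        sum(pages_printed) AS tot_prnt_pgs
        count(eval(size_paper="letter")) AS ltr_count
        sum(eval(if(size_paper="letter", pages_printed, 0))) AS ltr_tot_pgs
        count(eval(size_paper="11x17")) AS lgl_count
        sum(eval(if(size_paper="11x17", pages_printed, 0))) AS lgl_tot_pgs
        BY prnt_name
| lookup printers.csv prnt_name AS prnt_name OUTPUT location
| table prnt_name, location, count, tot_prnt_pgs, ltr_count, ltr_tot_pgs, lgl_count, lgl_tot_pgs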
Thanks for your help Giuseppe. This is helpful for getting the duration. However, I would also like to table the results from filtering the events in sourcetypeA together with the duration. This solution does not seem to merge the two resulting searches, e.g. | table _time computerName sessionID filteredInfoIWant1 filteredInfoIwant2 duration
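Something like eventstats is what I am imagining, so the individual events are kept while the duration rides along (a sketch; it assumes the duration is the time range per sessionID, and my_index is a placeholder):

index=my_index sourcetype=sourcetypeA ```my_index is a placeholder; add your other filters here```
| eventstats range(_time) AS duration BY sessionID
| table _time computerName sessionID filteredInfoIWant1 filteredInfoIwant2 duration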
Try messing with a custom URL in a markup box, but I would not hold out hope.
# simquery command
[commands/simquery]
access = read : [ admin ], write : [ admin ]
export = system

This is the default.meta file shipped with the application. Without additional roles added, it seems to make sense that only admins can utilize the command, as you have indicated. I would first start with adding a role that is escalated from just the user level and verify.
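For example, a local.meta override along these lines might do it (a sketch; "power_simquery" is a placeholder for whatever escalated role you create):

# local.meta in the app ("power_simquery" is a placeholder role)
[commands/simquery]
access = read : [ admin, power_simquery ], write : [ admin ]
export = system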
We experienced this exact error without much of an explanation. But the Add-On documentation states the following in the Troubleshooting section: Ensure that the Event Stream API has been enabled for the CID. The CrowdStrike API documentation agrees, particularly if you're a GovCloud customer. After opening a ticket with CrowdStrike and asking them to enable event streams on our CID, the error cleared up and logs began to populate.
Do the certs on the indexer need to be copied to the forwarder, or does the forwarder need its own?
I apologize for the confusion. I am trying to send the data to a third-party application (not a Splunk database). Currently, I am working to connect various databases to Splunk; the customer already has databases connected. My goal is to forward the audit logs from each database that is connected to Splunk to our system. We are using Logstash to receive the data, but since Logstash does not have a Splunk input plugin, I am attempting to have Logstash receive over TCP and to send from the Universal Forwarder to a TCP port, which can be set to 514. The architecture looks like this: Splunk (with databases) -> Universal Forwarder -> Logstash -> Analysis.
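On the forwarder side, the outputs.conf I have in mind looks roughly like this (a sketch; the hostname is a placeholder):

# outputs.conf on the Universal Forwarder (hostname is a placeholder)
[tcpout:logstash]
server = logstash.example.com:514
# send plain TCP rather than Splunk's cooked wire protocol
sendCookedData = false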
Hi, I'm trying to ingest macOS logd into Splunk Cloud. When I enable the logd input, it doesn't work. Based on the logs, it builds the "log show" command incorrectly, leaving the --start/--end values unquoted:

log show --style ndjson --no-backtrace --no-debug --no-info --no-loss --no-signpost --predicate 'subsystem == "com.apple.TimeMachine" && eventMessage CONTAINS[c] "backup"' --start 2024-10-18 16:47:55 --end 2024-10-18 16:48:25

It should be:

log show --style ndjson --no-backtrace --no-debug --no-info --no-loss --no-signpost --predicate 'subsystem == "com.apple.TimeMachine" && eventMessage CONTAINS[c] "backup"' --start "2024-10-18 16:47:55" --end "2024-10-18 16:48:25"

Has anyone else noticed this? Is there a fix, or should I just create a support ticket? r. Ismo
Hello @Kashinath.Kumbharkar, Thanks for posting your question in our community. You could use the Machine Agent with an extension to report AWS CloudWatch metrics to the Controller. Please kindly check the documentation: https://docs.appdynamics.com/appd/24.x/latest/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics

Here are some extensions:
https://github.com/Appdynamics/aws-cloudwatch-exts-commons
https://github.com/Appdynamics/aws-customnamespace-monitoring-extension

Please note that all extensions are now available under an open-source model, where you have direct access to the source code and may evolve them to suit your needs or build new use cases. Reference: https://docs.appdynamics.com/paa/en/appdynamics-support-advisories/support-advisory-changes-to-extensions-support-model

Hope this helps. Best regards, Xiangning
Cc: @Ryan.Paredez
@ITWhisperer Thanks. This was helpful. I tweaked it to include more aggregate functions.
Did this get resolved? We are also facing the same issue.
Hello team, I am confused by the multiple Carbon Black apps for SOAR. Can you please suggest which one is preferable for which use case?
index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| xyseries ClassName _time avgCpuTime
| eval "%Reduction"=round(100*('16-Oct-24'-'17-Oct-24')/'16-Oct-24',0)
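If you want the percent sign rendered in the column, as in your desired output, a fieldformat on top might work (a sketch):

| fieldformat "%Reduction" = tostring('%Reduction') . "%"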
Hi @rolfkuper , When you say "nothing happens", you mean literally nothing?  No errors or dialog or anything?  I would have said that you're missing AGREETOLICENSE=Yes on that command line, but I don't understand why nothing at all would happen... You could try adding the following to the command line, and then maybe there'll be some hints in the log file:   /l*vx msiexec.log   Cheers,    - Jo.  
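P.S. For reference, a complete line with the license property and verbose logging might look something like this (the MSI filename is just a placeholder):

msiexec /i splunkforwarder.msi AGREETOLICENSE=Yes /l*vx msiexec.log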
I am using Splunk to generate the table below. It is run over a two-day date range, where I am trying to compare the counts:

ClassName   16-Oct-24   17-Oct-24
ClassA      544         489
ClassB      39          47
ClassC      1937        2100

My Splunk query is as follows:

index=myindex RecordType=abc ClassName IN ("ClassA", "ClassB", "ClassC")
| bucket _time span=1d
| stats avg(cpuTime) as avgCpuTime by ClassName _time
| xyseries ClassName _time avgCpuTime

I need the output below, which has an extra column that gives the comparison. How can we tweak this query? Is there another way to achieve this in a more visually appealing manner?

ClassName   16-Oct-24   17-Oct-24   %Reduction
ClassA      544         489         10%
ClassB      39          47          -21%
ClassC      1937        2100        -8%