All Topics

We're using Boomi for the first time as the platform for our website. We also want to use Splunk for web monitoring. Has anyone used Boomi and Splunk together?
Hi, I am trying to find failed and successful logins for all users sharing a single IP, so the output would look something like:

ip 1.1.1.1 ... users: john doe, 4 failed and 1 success; jay sean, 5 failed and 2 success
ip 1.2.3.4 ... user: tim smith, 3 failed and 1 success

Starting query: index=authentication type=* | ... (type is the type of failure)
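A minimal SPL sketch of one way to get those counts, assuming the events carry fields named src_ip, user, and type, with successful logins marked type="success" (all of those field names and values are assumptions — adjust to your data):

index=authentication type=*
| stats count(eval(type!="success")) AS failed count(eval(type="success")) AS success BY src_ip user

The BY clause produces one row per IP/user pair, so each IP's users appear together in the results.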
Hi, I am looking to create a bubble chart like this: https://www.highcharts.com/demo/bubble, where I can click and drag the mouse pointer to draw a box in the chart. Every value inside the box would then be passed as a filter to subsequent charts. In the case of the linked demo (https://www.highcharts.com/demo/bubble), you can select a bunch of bubbles (countries) and those countries should act as filters to subsequent charts. Any help? Note: I can already drill down from one chart to another based on a single selected bar (in the case of a bar chart); I am more interested in the multiselect. Examples in D3.js: https://observablehq.com/@d3/brushable-scatterplot-matrix https://bl.ocks.org/mthh/e9ebf74e8c8098e67ca0a582b30b0bf0
I want to index all *.log files recursively from /var/log. I followed this instruction: https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/Specifyinputpathswithwildcards

My inputs.conf looks like this:

[monitor:///var/log/]
whitelist = \.log$
recursive = true
disabled = false
index = rpi_logs
sourcetype = linux_logs

It seems to be indexing only /var/log/daemon.log and /var/log/auth.log, but I also have log files in the /var/log/mysql and /var/log/nginx directories and those are omitted. What am I doing wrong?
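Two hedged checks that often narrow this kind of thing down (assuming Splunk runs as a non-root user named splunk and a default $SPLUNK_HOME layout; adjust to your install):

# subdirectories like /var/log/mysql and /var/log/nginx are often mode 0750,
# so a non-root splunk user may simply not be able to read them
sudo -u splunk ls -l /var/log/mysql /var/log/nginx

# confirm how the monitor stanza actually resolves after all conf layering
$SPLUNK_HOME/bin/splunk btool inputs list "monitor:///var/log/" --debug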
When someone gets activated and deactivated, this data is always consolidated. My question is: how can I separate this data based on timestamps? If someone were to get deactivated and then reactivated afterward, how can I exclude them from an inputlookup? I have a search that I am using, but I am running into false positives because of what I mentioned above: sometimes someone will get re-activated after a deactivation, and I need Splunk to recognize that a re-activation after a deactivation is not an alert or an actionable offense. What commands or other approaches can I use? Thank you.

Search:

index="AD" index="ADUser" eventType="SSO" OR eventType="start"
| eval from="search"
| append [| inputlookup users.csv | table users | eval from="lookup"]
| stats values(from) as from by users
| where mvcount(from)=2 AND from="lookup"
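A hedged sketch of one way to key off the most recent state per user, assuming the events carry a user field and eventType values of "activated" and "deactivated" (those names are assumptions about your data):

index="AD" (eventType="activated" OR eventType="deactivated")
| stats latest(eventType) AS lastState latest(_time) AS lastTime BY user
| where lastState="deactivated"

Because only the latest event per user survives the stats, anyone re-activated after a deactivation drops out instead of raising a false positive.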
I'm getting ready to upgrade my cluster from 6.4.2 to 6.6.12 to 7.3.4. However, my Replication Factor and Search Factor are not met. Is it OK to proceed with the upgrade anyway? If not, what can be done to fix it? I've searched for resolutions but have not found anything that applies or works yet.

RF=2, SF=2. I have a master, 2 indexers/peers, and 2 search heads.

Fixup Tasks in Progress = 0
Fixup Tasks Pending = 12737
Excess Buckets = 0

Most fixup statuses are "cannot fix up search factor as bucket is not serviceable" or "Missing enough suitable candidates to create searchable copy in order to meet replication policy. Missing={ default:1 }".
I'm trying to build a simple app to deploy to all of my Windows hosts to collect the CPU, memory, HDD, etc. that I'm looking for using SAI. I created the app and deployed it using forwarders, and it's working great. The only "dimension" I'm collecting at the moment is "entity_type::Windows_Host", but I want to add more dimensions once inside the SAI app — application names, server classes, etc. — AFTER I've already ingested the raw data. Is that possible?

Part of my inputs.conf (for example):

[perfmon://PhysicalDisk]
counters = % Disk Read Time;% Disk Write Time
instances = *
interval = 30
mode = single
object = PhysicalDisk
index = em_metrics
_meta = entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:PhysicalDisk
disabled = 0

The reason: I want one app to deploy to all 10k Windows forwarders to collect the data I need, without having to build custom server classes and apps for every possible bundle of servers and label the dimensions in each app. That would be a huge time saver and more dynamic. Ideas?
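One hedged search-time alternative: keep the forwarder app generic and attach the extra dimensions with a lookup keyed on host. Here host_dimensions.csv and its server_class/app_name columns are assumptions (a lookup you would build yourself), and the metric_name pattern is a guess at how single-mode perfmon metrics are named in your index:

| mstats avg(_value) AS avg_value WHERE index=em_metrics AND metric_name="PhysicalDisk.*" BY host
| lookup host_dimensions.csv host OUTPUT server_class app_name
| stats avg(avg_value) AS avg_value BY server_class

Whether SAI's entity views pick up lookup-based dimensions is a separate question; this only shows the enrichment at search time.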
{ "message": { "correlation": "12345678", "headers": {}, "protocol": "HTTP/1.1", "remote": "111.11.11.111", "requestMethod": "GET", "requestPath": "/abc/<dynamic_value>/xyz... See more...
{ "message": { "correlation": "12345678", "headers": {}, "protocol": "HTTP/1.1", "remote": "111.11.11.111", "requestMethod": "GET", "requestPath": "/abc/<dynamic_value>/xyz", "type": "request" } } Here from "message.requestMethod" and "message.requestPath" I need to find unique combinations and give some name to it. "message.requestPath" can have different endpoints. Tried something like below which is not working: searchquery | eval api = case (message.requestMethod = GET AND message.requestPath="/abc/<dynamic_value>/xyz", "GET_VERSION_API", message.requestMethod = POST AND message.requestPath="/abc/<dynamic_value>/xyz", "POST_VERSION_API", 1 = 1, "default") | stats count by api
I am attempting to display unique values in a table. Some of the fields are empty and some are populated with the respective data. For example, I only want the unique fields from each of these events:

systemname | domain | os
system1 | abc.com | Windows 10
system2 | | Windows 7
system3 | abc.com | Windows 10
system4 | abc.com | Windows 7
system1 | abc.com | Windows 10
system2 | | Windows 7
system3 | abc.com | Windows 10
system4 | abc.com | Windows 7

When I run the command:

| dedup systemname, domain, os | table systemname, domain, os

I get the following results:

system1 | abc.com | Windows 10
system3 | abc.com | Windows 10
system4 | abc.com | Windows 7

The desired result is:

system1 | abc.com | Windows 10
system2 | | Windows 7
system3 | abc.com | Windows 10
system4 | abc.com | Windows 7

It is not listing the rows with the blank field. I tried various options with the dedup command, such as keepempty=true, but that is not working. I have also tried uniq, but my understanding is that it compares the entire event, which is not what I want.
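A hedged sketch of one workaround: fill the empty fields before dedup, so a null domain counts as its own value instead of knocking the event out of the comparison (the "-" placeholder is an arbitrary choice and will show in the table; swap it back afterward with eval if needed):

... | fillnull value="-" systemname domain os
| dedup systemname domain os
| table systemname domain os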
I have a script for Linux that executes "sar -n DEV" and formats the output to look like:

Linux <kernel version> (<hostname>) <date> <arch> (<#> CPU)
Average: <interface> <field1> <field2> <field3>
Average: <interface> <field1> <field2> <field3>
Average: <interface> <field1> <field2> <field3>

Using Splunk Web's field extractor, I have a regex that applies field extraction to the first "Average:" line. How do I make the extraction apply to as many "Average:" lines as exist?
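If search-time extraction is an option, a hedged sketch using rex with max_match=0, which makes each field multivalued with one entry per "Average:" line (the \S+ groups are assumptions about your column format):

... | rex max_match=0 "Average:\s+(?<interface>\S+)\s+(?<field1>\S+)\s+(?<field2>\S+)\s+(?<field3>\S+)"

An alternative, if each "Average:" line should really be its own event, is splitting the lines apart at index time with LINE_BREAKER in props.conf.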
We've added a self-signed cert to our haproxy server, which passes traffic on to our search head cluster. After doing so, I changed the following in web.conf:

[settings]
tools.proxy.on = true
tools.proxy.base = https://internalhost.local

Now when trying to visit https://internalhost.local it properly maintains https and redirects to https://internalhost.local/en-US, which then redirects to https://127.0.0.1:8000. I can't seem to find any configuration value in my web or server settings that calls out 127.0.0.1, which puts me at a loss for what to adjust.
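A hedged way to hunt for where that loopback address is coming from (assuming a default $SPLUNK_HOME layout):

# show the effective web settings after all conf layering
$SPLUNK_HOME/bin/splunk btool web list settings --debug

# search every local conf for the literal address
grep -ri "127.0.0.1" $SPLUNK_HOME/etc/system/local $SPLUNK_HOME/etc/apps/*/local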
I am trying to create a timechart for a query that returns a count for a set of products whose lifecycle status is either compliant or out of compliance. The count is then used to create a percentage. The query returns the two counts (which I don't care about) and the associated percentage for each (which is what I do want to get into a timechart for the past 90 days). I have the search working but have not been able to figure out how to get the percentages (two lines, one chart) into a timechart. Below is my search; any help would be appreciated. What I have so far does return a count of events, but nothing appears in the chart and the search itself says no results found.

index="index" sourcetype=productversion ((removablemedia_win OSType="Windows*" AND LifeCycleStatus!="NewVersion") OR NOT ProductName="")
| stats count(LifeCycleStatus) AS lifecycletotal
| join type=outer OSType
    [search index="index" sourcetype=productversion (NOT ProductName="" (OSType="Windows*")) OR (removablemedia_win AND (OSType="Windows*") AND (LifeCycleStatus="Mainstream" OR LifeCycleStatus="Emerging"))
    | stats count(LifeCycleStatus) AS IsCompliant]
| eval Compliant=(IsCompliant/lifecycletotal)*100
| eval Compliant=round(Compliant,2)
| eval NonCompliant=(100-Compliant)
| eval NonCompliant=round(NonCompliant,2)
| timechart span=1d first(Compliant) as Compliant first(NonCompliant) as NonCompliant
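A hedged restructuring sketch: the stats calls discard _time, so the final timechart has nothing left to bucket, which would explain the empty chart. Computing both counts per day in one pass keeps _time alive (the compliant condition below is a simplified stand-in for your real logic):

index="index" sourcetype=productversion ((removablemedia_win OSType="Windows*" AND LifeCycleStatus!="NewVersion") OR NOT ProductName="")
| bin _time span=1d
| stats count AS lifecycletotal count(eval(LifeCycleStatus="Mainstream" OR LifeCycleStatus="Emerging")) AS IsCompliant BY _time
| eval Compliant=round(IsCompliant/lifecycletotal*100,2)
| eval NonCompliant=round(100-Compliant,2)
| table _time Compliant NonCompliant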
Why does a subsearch return a boolean value? I am expecting to see the department value. The search is shown below:

index="activedirectory" (userPrincipalName=*@emailaddress.ca)
| eval From_Sub_Search=tostring([search index="activedirectory" (userPrincipalName="*@emailaddress.ca") | return department])
| eval From_Department=tostring(department)
| table From_Sub_Search, From_Department
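A hedged explanation: the subsearch is expanded to its literal textual result, e.g. department="Sales", before eval runs, so eval ends up evaluating an equality comparison and tostring() receives a boolean. One workaround sketch is to have the subsearch hand back the value already wrapped in quotes so eval sees a string literal (the q helper field is introduced here for illustration, not a built-in):

index="activedirectory" (userPrincipalName=*@emailaddress.ca)
| eval From_Sub_Search=[search index="activedirectory" userPrincipalName="*@emailaddress.ca" | head 1 | eval q="\"" . department . "\"" | return $q]
| table From_Sub_Search, department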
Attempt A:
index="w3c" | rex field=_raw "?(sessionid=?)\w{8}-\w{4}-\w{4}-\w{4}-\w{12}" | table ABC _raw

Attempt B:
index="w3c" | rex field=_raw "\.\sessionid\=\"(?P)[\w{8}]-[\w{4}]-[\w{4}]-[\w{4}]-[\w{12}]" | table ABC _raw

Attempt C:
index="w3c" | rex field=_raw "\.\sessionid\=\"(?P[\w{8}]-[\w{4}]-[\w{4}]-[\w{4}]-[\w{12}])" | table ABC _raw

(I used a named field ABC; it gets cut from this post.)

From the text: .sessionid=d2a4f0de-747f-413c-a823-03ee7d241d5b&hash

The goal: ABC = d2a4f0de-747f-413c-a823-03ee7d241d5b
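A hedged working sketch: inside a character class like [\w{8}], the braces are literal characters rather than a repetition count, and the named group has to wrap the whole value it should capture:

index="w3c" | rex field=_raw "sessionid=(?<ABC>\w{8}-\w{4}-\w{4}-\w{4}-\w{12})" | table ABC _raw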
I have a Splunk query that gives me the different values of an appid, and a CSV file which has a single field called appid. I want to write a query which will give the appids that are in the search results but not in the CSV. Thanks in advance.
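A hedged sketch, where appids.csv stands in for your lookup file name and <your base search> for the query you already have:

<your base search>
| stats count BY appid
| search NOT [| inputlookup appids.csv | fields appid]

The subsearch expands to an OR of appid=... terms, so the NOT keeps only the appids absent from the CSV.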
Is there any (more detailed) documentation and/or other information that can be shared regarding the REST API for querying data on the attributes of Synthetic Monitoring Sessions that have been executed? There is one community article titled "How do I enable or disable synthetic jobs programmatically using the API?" (https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-enable-or-disable-synthetic-jobs-programmatically-using/ta-p/23455) which provides a working use case, but it is limited to one specific REST operation. What are our additional options? I am specifically interested in pulling the log that would be displayed for any one Session by clicking the button labelled "Show Script Output". I want to extract lines matching a regex pattern we would supply, pulling out one or more data points resulting from calculations that we perform in the script and write to the session log, and I want to do that in bulk rather than with an individual copy-and-paste out of each log.
After upgrading to 8.0.2 from 7.3.1, splunkweb won't start. After I remove the Search Activity app, it starts again.
Hi All, I need help extracting {0000000-0000-0000-0000-000000000000} and {0000000-0000-0000-0000-000000000000} from the log sample below during search. This is what I have so far:

sourcetype=wineventlog EventCode="4662" Account_Name="\$" Access_Mask=0x100 (Object_Type="%{19195a5b-6da0-11d0-afd3-00c04fd930c9}" OR Object_Type="domainDNS")
| rex field=Message "Properties: (?P[^\s]+) {1131f6ad-9c07-11d1-f79f-00c04fc2dcd2} "
| rex field=Message "Properties: (?P[^\s]+) {9923a32a-3607-11d2-b9be-0000f87a36b2} "
| rex field=Message "Properties: (?P[^\s]+) {1131f6ac-9c07-11d1-f79f-00c04fc2dcd2} "

Please help me fix this search.

Log sample:

LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4662
EventType=0
Type=Information
ComputerName=gghasfv.net
TaskCategory=Directory Service Access
OpCode=Info
RecordNumber=0000000
Keywords=Audit Success
Message=An operation was performed on an object.
Subject:
  Security ID: S-1-5-21-0000000-0000-0000-0000-000000000000
  Account Name: NAME$
  Account Domain: GOAL
  Logon ID: GOAL
Object:
  Object Server: DS
  Object Type: %{0000000-0000-0000-0000-000000000000}
  Object Name: %{0000000-0000-0000-0000-000000000000}
  Handle ID:
Operation:
  Operation Type: Object Access
  Accesses: Control Access
  Access Mask: 0x100
  Properties: Control Access
    {0000000-0000-0000-0000-000000000000}
    {0000000-0000-0000-0000-000000000000}
Additional Information:
  Parameter 1:
  Parameter 2:
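A hedged sketch that grabs both Properties GUIDs in one pass, assuming the literal "Control Access" text precedes them exactly as in the sample (the field names prop_guid1/prop_guid2 are introduced here):

sourcetype=wineventlog EventCode="4662" Access_Mask=0x100
| rex field=Message "Properties:\s+Control Access\s+(?<prop_guid1>\{[^}]+\})\s+(?<prop_guid2>\{[^}]+\})"

In the sample the GUIDs sit on their own lines, and \s+ matches newlines, which is what lets a single rex span them.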
Hi team! Should I upgrade my universal forwarders after I upgrade my HF?

Data > UF > HF > Indexer

Right now everything is on version 6.5.2. The indexer and HF will be on 7.3.4 soon. Thanks! Cheers