All Topics

I have the following fabricated search, which is a pretty close representation of what I actually want to do and gives me the results I want...

(index=_audit (action=search OR action=GET_PASSWORD)) OR (index=_internal [ search index=_audit (action=search OR action=GET_PASSWORD) | dedup user | table user ])
| stats count(eval(index="_audit")) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user

i.e. for everyone who has performed a search or GET_PASSWORD in one index, I want to know something about them gathered from both indexes. I can't get past the feeling that I shouldn't need to repeat the "index=_audit (action=search OR action=GET_PASSWORD)" search, which in the actual search is a whole lot of SPL, so duplicating it makes things untidy. Macros aside, can anyone come up with a more elegant solution?
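A minimal sketch of one alternative, assuming the _internal events of interest carry a user field: search both indexes once, flag users who have a qualifying _audit event with eventstats, and filter before the final stats, so the long filter appears only once. The trade-off (pulling back unfiltered _internal events) is something to verify on real data volumes:

(index=_audit (action=search OR action=GET_PASSWORD)) OR index=_internal
| eventstats count(eval(index="_audit")) as auditCount by user
| where auditCount > 0
| stats count(eval(index="_audit")) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user
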
Good morning,

Getting a weird error this morning when trying to run searches. It says my license is expired, or that I have exceeded my license limits too many times.

1. I have a valid Enterprise license at about 750GB a day.
2. Within the license manager all is well: no violations, valid license, etc.
3. Peers are associated to the license group (750GB is what I allocated for it).
4. Everything looks green with no messages.

Not sure what is causing this issue, but sometimes search will work and sometimes it won't. However, it will always throw the litsearch error.
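A hedged way to cross-check what the license manager has actually recorded, assuming you have access to _internal on the license manager (RolloverSummary is the daily total):

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval GB=round(b/1024/1024/1024,2)
| timechart span=1d sum(GB) as daily_GB

Days where daily_GB exceeds the 750GB allocation would explain warning messages even when the current day looks green.
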
We like to say, the lightsaber is to Luke as Splunk is to Duke. Curious yet? Then read Eric Fusilero's latest blog about the thrilling saga of Duke Cyberwalker, a fresh college grad turned cybersecurity hero. It's not just a creative and engaging narrative; it's a parable about daily professional challenges and growth. Join Duke on his epic adventure and discover how you, too, can transform the mundane into an adventure with Splunk.

Here's a sneak peek into the transformative stages of his hero's journey:

The Ordinary World: Duke begins his journey as a young, brilliant coder just out of college. He dreams of adventure, but despite his potential, he struggles with self-doubt.

The Call to Adventure: Duke's life takes a turn when he is asked to help thwart a cyber attack at his mother's mid-sized retail business. Although her company uses Splunk, it was attacked by Black Hat Bot, which is stealing company data, sending out false and alarming information, and creating an environment of distrust. She doesn't have a cybersecurity expert on site to optimize the platform and disarm the bot. She looks to Duke for help.

Refusal of the Call: At first, Duke is hesitant to get involved because he doesn't feel qualified to take on Black Hat Bot. He has yet to put his cybersecurity skills to the test and hasn't yet worked with Splunk, so he does not feel confident enough to face the bot.

Read the full blog here.
ITSI for Alert $result.service_name$ on host $result.src$ $result.description$

An event has been detected:
Host: $result.host$
Source: $result.source$
Error Code: $result.error_code$
Description: $result.description$

I'm fairly new to ITSI and Splunk in general, and I couldn't find any clear information on tokens. The only token that is working right now is $result.description$. Any assistance will be much appreciated.

Thank you
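One hedged first step, assuming standard alert-action token behavior: $result.fieldname$ tokens only resolve from fields that actually exist in the first result row of the triggering search, so make the correlation search explicitly output every field the template references, for example:

... your correlation search ...
| table service_name, src, host, source, error_code, description

If a field (say src) is missing or empty in the results, its token renders blank while the others still work, which matches only $result.description$ resolving.
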
Hi,

I have an event which has prod and test sections based on env. If it is test, I go into the nsps [{},{}] array, check for the names (say A, B, C, D), and get their associated ReadOnlyConsumerNames in tabular format.

Output as:

Name    ReadOnlyConsumerNames
A       Application, Lst, data
B       Application, Lst
C       Lst
D       Lst, Gt, PT

{
  "prod": {},
  "test": {
    "DistinctAdminConsumers": ["App", "pd."],
    "DistinctAdminUser": 2,
    "DistinctReadConsumers": ["Application.", "GT.", "Technology.", "data"],
    "DistinctReadUser": 4,
    "TotalAdminUser": 20,
    "TotalNSPCount": 10,
    "TotalReadUsers": 13,
    "nsps": [
      { "AdminConsumerNames": ["App.", "pd."], "AdminUserCount": 2, "Name": "A", "ReadOnlyConsumerNames": ["Application", "Lst", "data"], "ReadonlyUserCount": 3 },
      { "AdminConsumerNames": ["App", "Data"], "AdminUserCount": 2, "Name": "B", "ReadOnlyConsumerNames": ["Application", "Lst"], "ReadonlyUserCount": 3 },
      { "AdminConsumerNames": ["preprod", "pd"], "AdminUserCount": 2, "Name": "C", "ReadOnlyConsumerNames": ["Lst"], "ReadonlyUserCount": 1 },
      { "AdminConsumerNames": [...], "AdminUserCount": 2, "Name": "D", "ReadOnlyConsumerNames": ["Lst", "Gt", "PT"], "ReadonlyUserCount": 1 }
    ]
  }
}
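A minimal sketch, assuming the raw event is the JSON above: pull out each nsps object, expand to one result per object, parse it, and join the multivalue names:

<base search>
| spath path=test.nsps{} output=nsp
| mvexpand nsp
| spath input=nsp
| eval ReadOnlyConsumerNames=mvjoin('ReadOnlyConsumerNames{}', ", ")
| table Name ReadOnlyConsumerNames
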
Hey, I'm trying to play sounds in my Dashboard Studio dashboard. I heard it's not possible because Dashboard Studio is not as customizable as a classic dashboard. Does anyone know of any workaround before I have to switch to a classic dashboard?
I am working on a dashboard that has a bunch of fields and will be used by multiple teams and people who will need different fields from the table. Is there any way to add a toggle or filter or anything similar to offer a couple of presets (e.g., fields A, D, E, H as preset 1 for team 1; fields B, C, D, F, G as preset 2 for team 2; and so on)? I also use filters on fields in the dashboard table; if possible, I would want hiding a field to not impact the filters at all. Thanks in advance.
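A minimal sketch of one common pattern in a classic (Simple XML) dashboard, assuming a dropdown input whose token field_preset carries a field list per team; the field names and presets here are hypothetical. Define dropdown choices such as:

Preset 1 (team 1)  ->  value: A D E H
Preset 2 (team 2)  ->  value: B C D F G

Then end the table's search with the token:

... | table $field_preset$

Because the preset only changes the final table command, any filtering done earlier in the search pipeline is unaffected by which fields are shown.
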
Hello everyone, I have built a dashboard with Dashboard Studio, but in the panels I have noticed that although you can set many properties, you cannot change the position of the markdown text. I have already looked through the documentation, but to no avail (maybe I am missing something). By changing position I simply mean aligning the panel text left, centre, or right. Do you have any ideas? Thank you, biwanari
Hello, I have a log file with the same name in both production and stage, but with a different sourcetype name in each. As I don't want those logs to be ingested from production, I have added the below entry in props.conf:

[source::<Log file path>]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But I want the same log file from stage and not from production. In props.conf, will adding the sourcetype of prod restrict the logs from production and ingest the logs from stage, where the sourcetype name is different?

[source::<Log file path>]
[sourcetype = <Prod Sourcetype>]
TRANSFORMS-null = setnull

In addition, the prod sourcetype covers two other logs, and I don't want those to get stopped because of this configuration change. Thanks
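A hedged sketch of one way to scope the drop to the prod sourcetype only (so the stage copy and the other prod logs keep flowing): source and sourcetype cannot be combined in a single props.conf stanza, but the transform itself can match the sourcetype via SOURCE_KEY. <Prod Sourcetype> is a placeholder for your real name:

props.conf:
[source::<Log file path>]
TRANSFORMS-null = setnull_prod

transforms.conf:
[setnull_prod]
SOURCE_KEY = MetaData:Sourcetype
REGEX = <Prod Sourcetype>
DEST_KEY = queue
FORMAT = nullQueue

Because the props stanza is keyed on the shared file path and the transform only fires when the sourcetype matches prod, the stage file (different sourcetype) and the other prod logs (different sources) are untouched.
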
The IP address keeps changing with the same error:

Forwarder Ingestion Latency
Root cause(s): Indicator 'ingestion_latency_gap_multiplier' exceeded configured value. The observed value is 272246. Message from D97C3DE9-B0CE-408F-9620-5274BAC12C72:192.168.1.191:50409

How do you solve the problem?
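A hedged way to see which hosts are actually lagging, assuming events carry sane timestamps (run over a short recent window, since index=* is expensive):

index=* earliest=-1h
| eval latency=_indextime-_time
| stats avg(latency) as avg_latency, max(latency) as max_latency by host
| sort - max_latency

Hosts with large latencies point at the forwarder(s) behind the health warning; common causes are blocked queues on the forwarder, clock skew, or re-reading old files.
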
Hi Community,

I have a data source that sometimes submits faulty humidity data, like 3302.4 percent. To clean out / delete these outlier events, I built a timechart avg to get the real humidity curve, and from this curve I get the max and min with stats to get the upper and lower bounds. ...but my search won't work, and I need your help. Here is a makeresults sample:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
[ search | timechart eval(round(avg(humidity),1)) AS avg_humidity | stats min(avg_humidity) as min_avg_humidity ]
| where humidity < min_avg_humidity
```| delete ```
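A minimal sketch of a more robust pattern that avoids the subsearch entirely, assuming an outlier is anything far from the median (the 10-point threshold is an assumption to tune). The median is used instead of the average because a single extreme value like 3302.4 drags the average and can hide the outlier:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
| eventstats median(humidity) as median_humidity
| where abs(humidity - median_humidity) > 10

On the sample this keeps only the 30.9 event, which is exactly the candidate for ```| delete```.
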
Hi Team,

I am fetching unique "ITEM" values from a first SQL query running on one database, then passing those values to another SQL query to fetch the corresponding values from a second database.

First SQL query:

select distinct a.item from price a, skus b, deps c, supp_country s where zone_id in (5, 25) and a.item = b.sku and b.dept = c.dept and a.item = s.item and s.primary_supp_ind = 'Y' and s.primary_pack_ind = 'Y' and b.dept in (7106, 1666, 1650, 1651, 1654, 1058, 4158, 4159, 489, 491, 492, 493, 495, 496, 497, 498, 499, 501, 7003, 502, 503, 7004, 450, 451, 464, 465, 455, 457, 458, 459, 460, 461, 467, 494, 7013, 448, 462, 310, 339, 7012, 7096, 200, 303, 304, 1950, 1951, 1952, 1970, 1976, 1201, 1206, 1207, 1273, 1352, 1274, 1969, 1987, 342, 343, 7107, 7098, 7095, 7104, 2101, 2117, 7107, 7098, 1990, 477, 162, 604, 900, 901, 902, 903, 904, 905, 906, 908, 910, 912, 916, 918, 7032, 919, 7110, 7093, 7101, 913, 915, 118, 119, 2701, 917) and b.js_status in ('CO');

Second SQL:

WITH RankedData AS (
  SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated,
         ROW_NUMBER() OVER (PARTITION BY Product_Id, BusinessUnit_Id ORDER BY LastUpdated DESC) AS RowNum
  FROM RETAIL.DBO.CAT_PRICE (nolock)
  WHERE BusinessUnit_Id IN ('zone_5', 'zone_25')
    AND Product_Id IN ($ITEM$)
)
SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated
FROM RankedData
WHERE RowNum = 1;

When I use the map command as shown below, the expected results are fetched, but only 10k records, per the map command limitations. I want to fetch all the records (around 30k).

Splunk query:

| dbxquery query="First SQL query" connection="ABC"
| eval comma="'"
| eval ITEM='comma' + 'ITEM' + 'comma' + ","
| mvcombine ITEM
| nomv ITEM
| fields - comma
| eval ITEM=rtrim(tostring(ITEM), ",")
| map search="| dbxquery query=\"Second SQL query\" connection=\"XYZ\""

But when I use the join command as shown below to get all the results (more than 10k), I am not getting the desired output; the output only contains results from the first query. I tried replacing the column name Product_Id in the second SQL with ITEM in all places, but still no luck.

| dbxquery query="First SQL query" connection="ABC"
| fields ITEM
| join type=outer ITEM [| dbxquery query="Second SQL query" connection="XYZ"]

Could someone help me understand what is going wrong and how I can get all the matching results from the second query?
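A hedged sketch of one pattern that sidesteps both the map limit and the join subsearch limits: run the second query filtered only by the zones (no IN list), bring both result sets into one pipeline with append, and stitch them with stats. This assumes the zone-filtered second query is small enough to return whole, and that Product_Id and ITEM hold the same values:

| dbxquery connection="ABC" query="<first SQL query>"
| fields ITEM
| eval in_first=1
| append [| dbxquery connection="XYZ" query="<second SQL, without the AND Product_Id IN (...) clause>"
    | rename Product_Id as ITEM]
| stats values(in_first) as in_first, values(BusinessUnit_Id) as BusinessUnit_Id, values(Price) as Price, values(LastUpdated) as LastUpdated by ITEM
| where in_first=1

Note the rename: join matches on field names, so joining on ITEM while the second dataset calls the field Product_Id returns nothing, which matches the symptom described. Also note values() merges the per-zone rows into multivalue fields; add BusinessUnit_Id to the by clause if one row per (ITEM, zone) is needed.
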
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear, I'll try to explain my problem in more detail.

I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards over the last 24 hours, using a Splunk API call. I'm using a POST request to the ".../services/pdfgen/render" endpoint.

First, I couldn't find any documentation on this matter. Furthermore, even when looking at $SPLUNK_HOME/lib/python3.7/site-packages/splunk/pdf/pdfgen_*.py (endpoint, views, search, utils), I couldn't really understand what arguments to use to ask for the last 24 hours of data. I know it should be possible because it is doable in the Splunk GUI, where you can choose a time range and render according to it. I saw something looking like time range args, et and lt, which should be earliest time and latest time, but I don't know what type of time value they expect, and trying random things didn't get me anywhere. If you know anything on this subject, please help. Thank you.
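A hedged sketch only, since this endpoint is largely undocumented: community examples of /services/pdfgen/render pass the dashboard name and namespace (app) as form parameters, and time parameters elsewhere in Splunk's REST API accept both epoch seconds and relative time modifiers, so et/lt may behave the same (an assumption to verify against your instance):

curl -k -u admin:changeme https://localhost:8089/services/pdfgen/render \
  -d input-dashboard=my_dashboard \
  -d namespace=search \
  -d et=-24h -d lt=now \
  -o my_dashboard.pdf

If relative modifiers are rejected, epoch values (e.g. et=1729000000) would be the next thing to try.
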
Hi,

I want to know if it is possible to have a line chart with the area between the max and min values filled with color.

Example: for the below chart, we will be adding 2 more lines (Max and Min), and we would like to have color filled in the area between the Max and Min lines.

Current query to generate the 3 lines:

| table Start_Time CurrentWeek "CurrentWeek-1" "CurrentWeek-2"

2 more lines (Max and Min) need to be added to the above line chart, with the color filled between Max and Min.
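A hedged sketch of the usual trick in classic dashboards: instead of charting Max and Min directly, chart Min plus the Range (Max minus Min) as a stacked area and hide the Min series by coloring it like the background, so only the band between the two lines appears shaded. The Max/Min field names here are assumptions:

... your search ...
| eval Range = Max - Min
| fields Start_Time Min Range CurrentWeek "CurrentWeek-1" "CurrentWeek-2"

Then set the panel to a stacked area chart (or a chart overlay for the week lines) and give the Min series a background-matching color via charting.fieldColors. Dashboard Studio and classic charts differ here, so treat the exact chart options as something to verify.
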
Hello,

Per the official AppDynamics documentation, a single-node Events Service cluster is not supported for production and should be used for PoV or testing purposes only. By default, the Events Service is installed as a "production" deployment and expects to run as a multi-node cluster, hence it will fail/crash if it is run on a single node.

To run a single node, you will need to configure events-service-api-store.yml accordingly:

1. Comment out the following line (by putting "#" in front of it):
cluster.initial_master_nodes: ${ad.es.cluster.initial_master_nodes}

2. Add the following line under the above:
discovery.type: single-node

3. Comment out the following line (by putting "#" in front of it):
discovery.seed_hosts: ${ad.es.node.unicast.hosts}

4. Restart the Events Service to apply the new configs.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html#single-node-discovery

If you encounter any issues when running the above configurations, please reach out to the AppDynamics support organization.
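Putting the three edits together, the relevant fragment of events-service-api-store.yml would look roughly like this (a sketch of the steps above, not the whole file):

# cluster.initial_master_nodes: ${ad.es.cluster.initial_master_nodes}
discovery.type: single-node
# discovery.seed_hosts: ${ad.es.node.unicast.hosts}
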
Dear all, I'm trying to search for denied actions in a subnet, regardless of whether it is the source or the destination. I tried these without success; maybe you can help me out. Thank you!

index=* AND src="192.168.1.0/24" OR dst="192.168.1.0/24" AND action=deny

index=* action=deny AND src_ip=192.168.1.0/24 OR dst_ip=192.168.1.0/24

Just found it:

index=* dstip="192.168.1.0/24" OR srcip="192.168.1.0/24" action=deny
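For anyone hitting the same wall: besides using the right field names for your sourcetype (srcip/dstip here, vs. src/dst or src_ip/dst_ip elsewhere), AND binds tighter than OR in SPL, so the earlier attempts were parsed differently than intended; even the "found" query parses as dstip-match OR (srcip-match AND deny). Parentheses make the intent explicit; a hedged general form:

index=* action=deny (srcip="192.168.1.0/24" OR dstip="192.168.1.0/24")
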
Hello. I installed a trial version of Splunk Enterprise 7.3 in my home environment.

Environment: Windows Server 2019 Essentials (192.168.0.x), Active Directory

I installed it on a single machine only; no forwarders or other components are in use.

When accessing Splunk Web, the following addresses connect correctly:
https://localhost:8000
https://127.0.0.1:8000

However, access using the server's own IP address or hostname times out:
https://192.168.0.x:8000
https://foo:8000

Accessing those addresses from other clients likewise fails to display the Splunk screen.

Why would this be? I suspect some setting is missing. Any advice would be appreciated.
Hello! I'm using Splunk_SA_CIM with ESS, and I'm currently studying most of the ESCU correlation searches for my own purposes.

Problem: I discovered that most of my ESCU rules are creating a lot of notable events which, after investigation, were all false positives. All these rules are based on fields coming from the Endpoint data model (for example, Processes.process_path), and because most of the process_path values are equal to "null", the search triggers and creates a notable event.

I've already updated every app I use, and to gather Windows data I'm using the Splunk_TA_Windows add-on. Do you have any clue how I can find where the problem is and solve it?
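A hedged first diagnostic: find out which data sources are feeding "null" process paths into the data model, since the fix usually lives in that source's inputs or field extractions rather than in the correlation searches. Assuming the Endpoint data model is accelerated:

| tstats count from datamodel=Endpoint.Processes where Processes.process_path="null" by index, sourcetype, source

If the offenders are Windows event logs, check whether the source events actually contain a process path; if they don't, the field gets filled with the literal "null", and excluding those events in the rules (e.g. Processes.process_path!="null") is a common stopgap while the source is fixed.
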
I have a saved search which is scheduled, but it is not showing up and not running at the scheduled time.
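A hedged place to start: the scheduler's own logs show whether the search is being skipped, deferred, or never considered. Replace the name with your saved search's:

index=_internal sourcetype=scheduler savedsearch_name="<your saved search>"
| stats count by status, reason

No results at all suggests the search is not actually enabled for scheduling (or runs in a user/app context the scheduler skips); status=skipped with a reason points at concurrency limits or a missed schedule window.
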
Is this TA still being developed and supported? https://splunkbase.splunk.com/app/4950/ I followed the 'visit site' link on the Splunkbase page and couldn't see the Enterprise version advertised.